946 results for "Fundamentals in linear algebra"
Abstract:
BACKGROUND: Cyclophosphamide and high-dose steroids have been used as limited induction therapy in progressive IgA nephropathy (IgAN) to reduce the loss of renal function and proteinuria. We evaluated the effect of cyclophosphamide pulses (CyP) and mycophenolic acid (MPA) as sequential therapy on renal function in patients with progressive IgAN. METHODS: Twenty patients with progressive IgAN and advanced renal failure (median GFR 22 ml/min per 1.73 m2) and further disease activity (ΔGFR −0.8 ml/min per month) after cyclophosphamide (CyP; n = 18) or steroid pulse therapy (n = 2) were treated with mycophenolate mofetil 1 g per day for a median of 27 months. RESULTS: The monthly loss of renal function was significantly reduced in linear regression analysis from −2.4 ml/min before CyP to −0.12 ml/min with CyP/MPA (p = 0.0009). Estimated renal survival time was significantly prolonged by a median of 65 months (p = 0.0014). Proteinuria decreased significantly from 1.7 to 0.4 g/l during MPA treatment (p = 0.015). In Cox regression analysis, only proteinuria >1.0 g/l was an independent risk factor for doubling of creatinine during CyP/MPA treatment (p = 0.03). CONCLUSION: Sequential therapy with CyP/MPA may arrest or slow the loss of renal function and reduce proteinuria even in patients with progressive IgAN who have passed the so-called 'point of no return'.
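The "monthly loss of renal function" reported above is the slope of a linear regression of serial GFR measurements on time. A minimal sketch of that calculation, using hypothetical monthly GFR values rather than the study's data:

```python
import numpy as np

# Hypothetical serial GFR measurements (ml/min per 1.73 m^2), one per month.
# Illustrative values only, not data from the study.
months = np.arange(12)
gfr = 30.0 - 0.8 * months + np.array([0.3, -0.2, 0.1, -0.4, 0.2, 0.0,
                                      -0.1, 0.3, -0.3, 0.1, 0.0, -0.2])

# Least-squares slope = estimated monthly change in renal function (Delta-GFR).
slope, intercept = np.polyfit(months, gfr, 1)
print(f"estimated GFR slope: {slope:.2f} ml/min per month")
```

The same slope, estimated separately before and during each treatment phase, yields the per-phase progression rates compared in the abstract.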
Abstract:
Fuzzy community detection aims to identify fuzzy communities in a network: groups of vertices such that the membership of a vertex in one community lies in [0,1] and the sum of its memberships over all communities equals 1. Fuzzy communities are pervasive in social networks, but only a few works have addressed fuzzy community detection. Recently, a one-step forward extension of Newman's Modularity, the most popular quality function for disjoint community detection, resulted in the Generalized Modularity (GM), which demonstrates good performance in finding well-known fuzzy communities. Thus, GM is chosen as the quality function in our research. We first propose a generalized fuzzy t-norm modularity to investigate the effect of different fuzzy intersection operators on fuzzy community detection, since the introduction of a fuzzy intersection operation is made feasible by GM. The experimental results show that the Yager operator with a proper parameter value performs better than the product operator in revealing community structure. Then, we focus on how to find optimal fuzzy communities in a network by directly maximizing GM, which we call the Fuzzy Modularity Maximization (FMM) problem. The work on the FMM problem leads to the major contribution of this thesis: an efficient and effective GM-based fuzzy community detection method that can automatically discover either a fuzzy partition of a network, when appropriate, which is much better than fuzzy partitions found by existing fuzzy community detection methods, or a crisp partition, when appropriate, which is competitive with partitions produced by the best disjoint community detection methods to date. We address the FMM problem by iteratively solving a sub-problem called One-Step Modularity Maximization (OSMM).
We present two approaches for this iterative procedure: a tree-based global optimizer called Find Best Leaf Node (FBLN) and a heuristic-based local optimizer. OSMM reduces to a simplified quadratic knapsack problem, so a solution of OSMM can be found in linear time. Since the OSMM algorithm is called recursively within FBLN and the structure of the search tree is non-deterministic, the FMM/FBLN algorithm has a time complexity of at least O(n²). We therefore also propose several highly efficient and very effective heuristic algorithms, the FMM/H algorithms. We compared the proposed FMM/H algorithms with two state-of-the-art community detection methods, modified MULTICUT Spectral Fuzzy c-Means (MSFCM) and the Genetic Algorithm with a Local Search strategy (GALS), on 10 real-world data sets. The experimental results suggest that the H2 variant of FMM/H is the best performing version. The H2 algorithm is very competitive with GALS in producing maximum modularity partitions and performs much better than MSFCM. On all 10 data sets, H2 is also 2-3 orders of magnitude faster than GALS. Furthermore, by adopting a slightly modified version of the H2 algorithm as a mutation operator, we designed a genetic algorithm for fuzzy community detection, GAFCD, in which elite selection and early termination are applied. The crossover operator is designed to make GAFCD converge fast and to enhance its ability to escape local minima. Experimental results on all the data sets show that GAFCD uncovers better community structure than GALS.
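The quality function at the heart of this work can be sketched in a few lines. The code below implements a generalized fuzzy modularity with the product t-norm (one common reading of GM; the exact operator studied in the thesis may differ) and shows that it reduces to Newman's modularity when the membership matrix is crisp. The graph and memberships are illustrative:

```python
import numpy as np

def fuzzy_modularity(A, U):
    """Generalized fuzzy modularity with the product t-norm.

    A: symmetric adjacency matrix (n x n).
    U: membership matrix (n x c); each row sums to 1.
    """
    k = A.sum(axis=1)                 # vertex degrees
    two_m = A.sum()                   # 2 * number of edges
    B = A - np.outer(k, k) / two_m    # modularity matrix
    # Product t-norm: joint membership of i and j in community c is U[i,c]*U[j,c].
    return np.trace(U.T @ B @ U) / two_m

# Two triangles joined by a single bridge edge (illustrative example).
edges = [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]
A = np.zeros((6, 6))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0

# Crisp partition: vertices 0-2 in community 0, vertices 3-5 in community 1.
U_crisp = np.zeros((6, 2))
U_crisp[:3, 0] = 1.0
U_crisp[3:, 1] = 1.0
print(fuzzy_modularity(A, U_crisp))  # equals Newman's modularity for this partition
```

With a crisp U the t-norm term collapses to the Kronecker delta of community labels, which is exactly Newman's formulation; replacing the product with the Yager operator gives the parametrized family studied in the thesis.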
Abstract:
This paper introduces and analyzes a stochastic search method for parameter estimation in linear regression models in the spirit of Beran and Millar [Ann. Statist. 15(3) (1987) 1131–1154]. The idea is to generate a random finite subset of a parameter space that will automatically contain points very close to an unknown true parameter. The motivation for this procedure comes from recent work of Dümbgen et al. [Ann. Statist. 39(2) (2011) 702–730] on regression models with log-concave error distributions.
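A minimal sketch of the stochastic search idea, under simplifying assumptions (uniform sampling over a bounded box, squared-error criterion; the paper's construction is more refined):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic linear model y = X @ beta_true + noise (illustrative data).
beta_true = np.array([1.5, -2.0])
X = rng.normal(size=(200, 2))
y = X @ beta_true + 0.1 * rng.normal(size=200)

# Generate a random finite subset of the parameter space [-5, 5]^2.
candidates = rng.uniform(-5, 5, size=(20000, 2))

# With high probability some candidate lies very close to beta_true;
# select the one minimizing the residual sum of squares.
rss = ((y[:, None] - X @ candidates.T) ** 2).sum(axis=0)
beta_hat = candidates[np.argmin(rss)]
print(beta_hat)
```

The key point the paper formalizes is that, as the candidate set grows, the minimizer over the random finite subset approaches the true parameter at a controlled rate.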
Abstract:
We prove large deviation results for sums of heavy-tailed random elements in rather general convex cones, i.e. semigroups equipped with a rescaling operation by positive real numbers. In contrast to previous results for the cone of convex sets, our technique does not use the embedding of cones in linear spaces. Examples include the cone of convex sets with Minkowski addition, the positive half-line with the maximum operation, and the family of square integrable functions with arithmetic addition and argument rescaling.
Abstract:
We solve two inverse spectral problems for star graphs of Stieltjes strings with Dirichlet and Neumann boundary conditions, respectively, at a selected vertex called the root. The root is either the central vertex or, in the more challenging problem, a pendant vertex of the star graph. At all other pendant vertices Dirichlet conditions are imposed; at the central vertex, at which a mass may be placed, continuity and Kirchhoff conditions are assumed. We derive conditions on two sets of real numbers to be the spectra of the above Dirichlet and Neumann problems. Our solution of the inverse problems is constructive: we establish algorithms to recover the mass distribution on the star graph (i.e. the point masses and the lengths of the subintervals between them) from these two spectra and from the lengths of the separate strings. If the root is a pendant vertex, the two spectra uniquely determine the parameters on the main string (i.e. the string incident to the root) if the length of the main string is known. The mass distribution on the other edges need not be unique; the reason for this is the non-uniqueness caused by the non-strict interlacing of the given data in the case when the root is the central vertex. Finally, we relate our results to tree-patterned matrix inverse problems.
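For a single Stieltjes string (point masses joined by massless threads under unit tension), the Dirichlet and Neumann spectra at one end are generalized eigenvalues of tridiagonal matrices, and the interlacing of the two spectra mentioned above can be observed numerically. This is a sketch under those standard single-string assumptions (the star-graph construction in the work itself is more involved); the masses and lengths are illustrative:

```python
import numpy as np
from scipy.linalg import eigh

# Point masses m_1..m_n and subinterval lengths l_0..l_n (illustrative values);
# the root is the left end, the right end is always held fixed (Dirichlet).
m = np.array([1.0, 2.0, 1.5])
l = np.array([0.5, 1.0, 0.5, 1.0])
n = len(m)
M = np.diag(m)

def stiffness(neumann_at_root=False):
    # Tridiagonal stiffness matrix of the mass chain under unit tension.
    K = np.zeros((n, n))
    for i in range(n):
        K[i, i] = 1/l[i] + 1/l[i+1]
    for i in range(n - 1):
        K[i, i+1] = K[i+1, i] = -1/l[i+1]
    if neumann_at_root:
        K[0, 0] -= 1/l[0]   # free (sliding) end at the root
    return K

dirichlet = eigh(stiffness(False), M, eigvals_only=True)
neumann = eigh(stiffness(True), M, eigvals_only=True)
# Interlacing of the two spectra: nu_1 <= lambda_1 <= nu_2 <= lambda_2 <= ...
print(neumann, dirichlet)
```

The Dirichlet matrix is a rank-one nonnegative perturbation of the Neumann one, which is exactly why the two spectra interlace; on the star graph the interlacing need not be strict, and that is the source of the non-uniqueness discussed above.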
Abstract:
Second-generation antipsychotics (SGAs) are increasingly prescribed to treat psychiatric symptoms in pediatric patients infected with HIV. We examined the relationship between prescribed SGAs and physical growth in a cohort of youth with perinatally acquired HIV-1 infection. Pediatric AIDS Clinical Trials Group (PACTG) Protocol 219C (P219C), a multicenter, longitudinal observational study of children and adolescents perinatally exposed to HIV, was conducted from September 2000 until May 2007. The analysis included P219C participants who were perinatally HIV-infected, 3-18 years old, prescribed their first SGA for at least 1 month, and had baseline data available prior to starting the first SGA. Each participant prescribed an SGA was matched (based on gender, age, Tanner stage, and baseline body mass index [BMI] z score) with 1-3 controls without antipsychotic prescriptions. The main outcomes were short-term (approximately 6 months) and long-term (approximately 2 years) changes in BMI z scores from baseline. There were 236 participants in the short-term and 198 in the long-term analysis. In linear regression models, youth with SGA prescriptions had increased BMI z scores relative to youth without antipsychotic prescriptions, for all SGAs (short-term increase = 0.192, p = 0.003; long-term increase = 0.350, p < 0.001) and for risperidone alone (short-term = 0.239, p = 0.002; long-term = 0.360, p = 0.001). Participants receiving both protease inhibitors (PIs) and SGAs showed especially large increases. These findings suggest that growth should be carefully monitored in youth with perinatally acquired HIV who are prescribed SGAs. Future research should investigate the interaction between PIs and SGAs in children and adolescents with perinatally acquired HIV infection.
Abstract:
Charles Taylor’s contribution to social imaginaries offers an interpretive framework for better understanding modernity as secularity. One of its main aspects is conceiving of human society in linear, homogeneous time (secular time). Looking into the Arabic intellectual tradition, I will argue in my paper that Taylor’s framework can help us understand major social and intellectual transformations. The Ottoman and Arabic modernization process during the 19th century has often been understood by focusing on certain core concepts. One of these is tamaddun, usually translated as “civilization.” I will be mostly talking about the works of two “pioneers” of Arab modernity (which is traditionally referred to as an-nahḍa, the so-called Arab Renaissance): the Syrian Fransīs Marrāsh and the Egyptian Rifāʿa aṭ-Ṭahṭāwī. First I will focus on Marrāsh’s didactic novel “The Forest of Truth” (1865), as it offers a complex view of tamaddun, which has sometimes been construed as merely a social and political reform program. The category of “social imaginary,” however, is useful in grasping the wider semantic scope of this concept, namely reading it as a signifier for human history conceived of in secular time, as Taylor defines it. This conceptualization of human history functioning within the immanent frame can also be observed in the introduction to “The Extraction of Pure Gold in the Description of Paris” (1834), a systematic account of a travel experience in France written by the other “pioneer,” aṭ-Ṭahṭāwī. Finally, in translating tamaddun as “the modern social imaginary of civilization/culture,” the talk aims to consider this imaginary as a major factor in the emergence of the “secular age.” Furthermore, it suggests the importance of studying (quasi-)literary texts, such as historiographical, geographical, and self-narratives in the Arabic literary tradition, in order to further elaborate continuities and ruptures in social imaginaries.
Abstract:
BACKGROUND: Anxiety disorders have been linked to an increased risk of incident coronary heart disease, in which inflammation plays a key pathogenic role. To date, no studies have looked at the association between proinflammatory markers and agoraphobia. METHODS: In a random Swiss population sample of 2890 persons (35-67 years, 53% women), we diagnosed a total of 124 individuals (4.3%) with agoraphobia using a validated semi-structured psychiatric interview. We also assessed socioeconomic status, traditional cardiovascular risk factors (i.e., body mass index, hypertension, blood glucose levels, total cholesterol/high-density lipoprotein-cholesterol ratio), health behaviors (i.e., smoking, alcohol consumption, and physical activity), and other major psychiatric disorders (other anxiety disorders, major depressive disorder, drug dependence), which were treated as covariates in linear regression models. Circulating levels of inflammatory markers, statistically controlled for the baseline demographic and health-related measures, were determined at a mean follow-up of 5.5 ± 0.4 years (range 4.7-8.5). RESULTS: Individuals with agoraphobia had significantly higher follow-up levels of C-reactive protein (p = 0.007) and tumor-necrosis-factor-α (p = 0.042) as well as lower levels of the cardioprotective marker adiponectin (p = 0.032) than their non-agoraphobic counterparts. Follow-up levels of interleukin (IL)-1β and IL-6 did not significantly differ between the two groups. CONCLUSIONS: Our results suggest an increase in chronic low-grade inflammation in agoraphobia over time. Such a mechanism might link agoraphobia with an increased risk of atherosclerosis and coronary heart disease, and needs to be tested in longitudinal studies.
Abstract:
The relationship between change in myocardial infarction (MI) mortality rate (ICD codes 410, 411) and change in use of percutaneous transluminal coronary angioplasty (PTCA), adjusted for change in hospitalization rates for MI and for change in use of aortocoronary bypass surgery (ACBS), from 1985 through 1990 at private hospitals was examined in the biethnic community of Nueces County, Texas, site of the Corpus Christi Heart Project, a major coronary heart disease (CHD) surveillance program. Age-adjusted rates (per 100,000 persons) were calculated for each of these CHD events for the population aged 25 through 74 years and for each of the four major sex-ethnic groups: Mexican-American and Non-Hispanic White women and men. Over this six-year period, there were 541 MI deaths, 2358 MI hospitalizations, 816 PTCA hospitalizations, and 920 ACBS hospitalizations among Mexican-American and Non-Hispanic White Nueces County residents. Acute MI mortality decreased from 24.7 in the first quarter of 1985 to 12.1 in the fourth quarter of 1990, a 51.2% decrease. All three hospitalization rates increased: the MI hospitalization rate increased from 44.1 to 61.3, a 38.9% increase; PTCA use increased from 7.1 to 23.2, a 228.0% increase; and ACBS use increased from 18.8 to 29.5, a 56.6% increase. In linear regression analyses, the change in MI mortality rate was negatively associated with the change in PTCA use (β = −0.266 ± 0.103, p = 0.017) but was not associated with the changes in MI hospitalization rate or in ACBS use. The results of this ecologic research support the idea that the increasing use of PTCA, but not ACBS, has been associated with decreases in MI mortality. The contrast in associations between these two revascularization procedures and MI mortality highlights the need for research aimed at clarifying the proper roles of these procedures in the treatment of patients with CHD.
The association between change in PTCA use and change in MI mortality supports the idea that some changes in medical treatment may be partially responsible for trends in CHD mortality. Differences in the use of therapies such as PTCA may be related to differences between geographical sites in CHD rates and trends.
Abstract:
Life expectancy has consistently increased over the last 150 years due to improvements in nutrition, medicine, and public health. Several studies found that in many developed countries life expectancy continued to rise following a nearly linear trend, contrary to a common belief that the rate of improvement would decelerate and follow an S-shaped curve. Using samples of countries spanning a wide range of economic development levels, we explored the change in life expectancy over time by employing both nonlinear and linear models. We then examined whether there were any significant differences in estimates between linear models assuming an auto-correlated error structure. When the data did not have a sigmoidal shape, nonlinear growth models sometimes failed to provide meaningful parameter estimates; the existence of an inflection point and asymptotes in the growth models made them inflexible with life expectancy data. In the linear models, there was no significant difference in the life expectancy growth rate and future estimates between ordinary least squares (OLS) and generalized least squares (GLS). However, the generalized least squares model was more robust because the data involved time-series variables and the residuals were positively correlated.
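The OLS/GLS comparison described above can be sketched with a feasible GLS (Cochrane-Orcutt-style whitening under an assumed AR(1) error structure); the simulated "life expectancy" series and all parameter values are illustrative, not the study's data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated life-expectancy series: linear trend plus AR(1) errors.
T = 80
t = np.arange(T, dtype=float)
rho_true, slope_true = 0.6, 0.25
e = np.zeros(T)
for i in range(1, T):
    e[i] = rho_true * e[i-1] + rng.normal(scale=0.5)
y = 60.0 + slope_true * t + e
X = np.column_stack([np.ones(T), t])

# Ordinary least squares.
beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)

# Estimate rho from OLS residuals, quasi-difference to whiten, refit: feasible GLS.
r = y - X @ beta_ols
rho = (r[:-1] @ r[1:]) / (r[:-1] @ r[:-1])
y_star = y[1:] - rho * y[:-1]
X_star = X[1:] - rho * X[:-1]
beta_gls, *_ = np.linalg.lstsq(X_star, y_star, rcond=None)
print(beta_ols[1], beta_gls[1])  # similar slope estimates, as the abstract reports
```

With positively correlated residuals the two point estimates stay close, but the GLS standard errors are the trustworthy ones, which matches the abstract's conclusion that GLS is the more robust choice.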
Abstract:
Abstract: Classical microwave circuit design is fundamentally based on the use of s-parameters, owing to their ability to successfully characterize the behavior of any linear circuit. The close relationship between s-parameters, current measurement systems, and linear simulation tools has driven their success and their extensive use in both the design and the characterization of microwave circuits and subsystems. However, despite the wide acceptance of s-parameters in the microwave community, the main drawback of this formulation lies in its inability to predict the behavior of real non-linear systems. Today, one of the main challenges for microwave designers is the development of an analogous framework that integrates non-linear modeling, large-signal measurement systems, and non-linear simulation environments, with the goal of extending the capabilities of s-parameters to large-signal operating regimes and thus obtaining an infrastructure that allows both the characterization and the design of non-linear circuits in a reliable and efficient way. Following this philosophy, several proposals have been developed in recent years, such as the X-parameters of Agilent Technologies and the Cardiff model, which attempt to provide this common platform in the large-signal domain. Within this context, one of the objectives of this Thesis is the analysis of the feasibility of using X-parameters in the design and simulation of oscillators for microwave transceivers.
Another relevant aspect in the analysis and design of linear microwave circuits is the availability of simple analytical methods, based on the transistor s-parameters, that allow the direct and rapid determination of the load and source impedances required to meet the design specifications in terms of gain, output power, efficiency, or input and output matching, as well as the analytical determination of key design parameters such as the stability factor or the power gain contours. Therefore, the development of an analytical design formulation, based on X-parameters and similar to the one available for small signal, would allow its use in non-linear applications and represents a new challenge to be faced in this work. Hence, the main objective of this Thesis is the development of an analytical methodology based on the use of X-parameters for the design of non-linear circuits, playing a role similar to that of s-parameters in the design of linear microwave circuits. Such analytical design methods would allow a significant improvement over the design procedures currently available for large signal, as well as a considerable reduction in design time, leading to much more efficient techniques.

Abstract: In the linear world, classical microwave circuit design relies on s-parameters due to their capability to successfully characterize the behavior of any linear circuit. Thus the direct use of s-parameters in measurement systems and in linear simulation analysis tools has facilitated their extensive use and success in the design and characterization of microwave circuits and subsystems. Nevertheless, despite the great success of s-parameters in the microwave community, the main drawback of this formulation is its limitation in predicting the behavior of real non-linear systems.
Nowadays, the challenge for microwave designers is the development of an analogous framework that integrates non-linear modeling, large-signal measurement hardware and non-linear simulation environments in order to extend s-parameter capabilities to the non-linear regime and thus provide the infrastructure for non-linear design and test in a reliable and efficient way. Recently, different attempts to provide this common platform have been introduced, such as the Cardiff approach and the Agilent X-parameters. Hence, this Thesis aims to demonstrate the capability of X-parameters to provide this non-linear design and test framework in a CAD-based oscillator context. Furthermore, the classical analysis and design of linear microwave transistor-based circuits is based on simple analytical approaches, involving the transistor s-parameters, that quickly provide an analytical solution for the input/output transistor loading conditions as well as analytically determine fundamental parameters such as the stability factor, the power gain contours or the input/output match. Hence, the development of similar analytical design tools that extend small-signal s-parameter capabilities to non-linear applications poses a new challenge that is faced in the present work. Therefore, the development of an analytical design framework, based on load-independent X-parameters, constitutes the core of this Thesis. These analytical non-linear design approaches would make it possible to significantly improve current large-signal design processes as well as dramatically decrease the required design time, thus yielding more efficient approaches.
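As an illustration of the kind of closed-form small-signal criterion the thesis aims to generalize, the Rollett stability factor can be computed directly from a transistor's s-parameters (the numerical values below are made up, not real device data):

```python
def rollett_k(s11, s12, s21, s22):
    """Rollett stability factor K and |Delta| from single-frequency s-parameters.

    Unconditional stability requires K > 1 and |Delta| < 1.
    """
    delta = s11 * s22 - s12 * s21
    k = (1 - abs(s11)**2 - abs(s22)**2 + abs(delta)**2) / (2 * abs(s12 * s21))
    return k, abs(delta)

# Illustrative transistor s-parameters at a single frequency (complex in general).
k, mag_delta = rollett_k(0.3, 0.05, 2.0, 0.4)
print(f"K = {k:.3f}, |Delta| = {mag_delta:.3f}")
# K > 1 and |Delta| < 1 -> unconditionally stable at this frequency
```

It is exactly this style of direct algebraic criterion, available in small signal, that the X-parameter-based analytical framework seeks to reproduce for large-signal operation.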
Abstract:
The present contribution discusses the development of a PSE-3D instability analysis algorithm in which a matrix forming and storing approach is followed. As an alternative to the spectral methods typically used in stability calculations, new stable high-order finite-difference-based numerical schemes for spatial discretization are employed. Attention is paid to the issue of efficiency, which is critical for the success of the overall algorithm. To this end, use is made of a parallelizable sparse matrix linear algebra package which takes advantage of the sparsity offered by the finite-difference scheme and, as expected, is shown to perform substantially more efficiently than spectral collocation methods. The building blocks of the algorithm have been implemented and extensively validated, focusing on classic PSE analysis of instability on the flat-plate boundary layer, temporal and spatial BiGlobal EVP solutions (the latter necessary for the initialization of the PSE-3D), as well as standard PSE in cylindrical coordinates using the nonparallel Batchelor vortex basic flow model, so that comparisons between PSE and PSE-3D are possible; excellent agreement is shown in all aforementioned comparisons. Finally, the linear PSE-3D instability analysis is applied to a fully three-dimensional flow composed of a counter-rotating pair of nonparallel Batchelor vortices.
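The efficiency argument (sparse finite-difference operators versus dense spectral collocation matrices) can be illustrated with a small sparse solve; a second-order scheme on a 1-D model problem stands in for the high-order schemes of the paper:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import spsolve

# Model problem: u'' = f on (0, 1) with u(0) = u(1) = 0,
# f chosen so the exact solution is sin(pi x).
n = 200
h = 1.0 / (n + 1)
x = np.linspace(h, 1 - h, n)
f = -np.pi**2 * np.sin(np.pi * x)

# Sparse tridiagonal second-difference operator: only O(n) stored entries,
# versus O(n^2) for a dense spectral differentiation matrix.
D2 = diags([1, -2, 1], [-1, 0, 1], shape=(n, n), format="csc") / h**2
u = spsolve(D2, f)

print(np.max(np.abs(u - np.sin(np.pi * x))))  # O(h^2) discretization error
```

A higher-order stencil widens the matrix bandwidth slightly but preserves the sparsity that sparse direct and iterative solvers exploit, which is the source of the speedup reported over spectral collocation.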
Abstract:
The objective of this Thesis has been the achievement of real-time simulations of industrial vehicles modeled as complex multibody systems made up of rigid bodies. For the development of a simulation program, four fundamental aspects must be considered: the modeling of the multibody system (types of coordinates, joints that are ideal or imposed by means of forces), the formulation used to set up the differential equations of motion (dependent or independent coordinates, global or topological methods, ways of imposing the constraint equations), the numerical integration method used to solve these equations in time (explicit or implicit integrators) and, finally, the details of the implementation (programming language, mathematical libraries, parallelization techniques). These four stages are interrelated and all of them have formed part of this work: from the generation of models of a van and of a truck with semi-trailer, through the use of three different dynamic formulations and the integration of the differential equations of motion by explicit and implicit methods, to the use of BLAS functions, sparse matrix techniques and the introduction of parallelization to exploit the different processor cores. The work presented in this Thesis is organized into eight chapters, the first of which is the Introduction. Chapter 2 presents two different semi-recursive formulations, the first of which is based on a double velocity transformation, obtaining the differential equations of motion in terms of the independent relative accelerations. The numerical integration of these equations has been carried out with the fourth-order explicit Runge-Kutta method.
The second formulation is based on dependent relative coordinates, imposing the constraints by means of position penalty factors and correcting the velocities and accelerations through projection methods. In this second case, the integration of the equations of motion has been carried out with the implicit HHT integrator (Hilber, Hughes and Taylor), which belongs to the Newmark family of structural integrators. Chapter 3 introduces the third formulation used in this Thesis. In this case, the connections between the bodies of the system are flexible joints, which makes it necessary to impose the joints by means of forces. This type of joint prevents working with relative coordinates, so the position of the system and the formulation of the equations of motion have been carried out using Cartesian coordinates and Euler parameters. In this global formulation, the constraints are introduced by forces (with an approach similar to that of the penalty factors), and the stabilization of the numerical integration process is also achieved by projections of velocities and accelerations. Chapter 4 presents a review of the main tools and strategies used to increase the efficiency of the implementations of the different algorithms. First, a series of basic considerations for increasing the numerical efficiency of the implementations is included. Next, the main characteristics of the code analyzers used are mentioned, as well as the mathematical libraries used to solve linear algebra problems with both dense and sparse matrices. Finally, the topic of parallelization on current multicore processors is developed in some detail, describing the pattern employed and the most important characteristics of the two proposed tools, OpenMP and Intel's TBB.
It should be noted that the characteristics of multibody systems (small problem sizes, frequent use of recursion, and intensive repetition of the computations over time with strong dependence on previous results) make the use of parallelization techniques extraordinarily difficult compared with other areas of computational mechanics, such as finite element computation. Building on the concepts introduced in Chapter 4, Chapter 5 is divided into three sections, one for each formulation proposed in this Thesis. Each of these sections describes the details of how the different implementations proposed for each algorithm have been carried out and which tools have been used. The first section shows the use of numerical libraries for dense and sparse matrices in the semi-recursive topological formulation based on the double velocity transformation. The second describes the use of parallelization by means of OpenMP and TBB in the semi-recursive formulation with penalty factors and projections. Finally, the use of sparse matrix techniques and parallelization in the global formulation with flexible joints and Euler parameters is described. Chapter 6 describes the results obtained with the formulations and implementations described previously. This chapter begins with a description of the modeling and topology of the two vehicles studied. The first model is a two-axle chassis-cab or van-type vehicle, belonging to the range of medium cargo vehicles. The second is a five-axle vehicle corresponding to the model of a truck or tractor with semi-trailer, belonging to the category of heavy industrial vehicles.
This chapter also presents a comparative study of the simulations of these vehicles with each of the formulations used, and the effects of the improvements achieved with the different strategies proposed in this Thesis are presented quantitatively. In order to draw conclusions more easily and to evaluate the improvements introduced in the Thesis more objectively, all the results in this chapter were obtained on the same computer, the top of the Intel Xeon range in 2007, though by now somewhat obsolete. Finally, Chapters 7 and 8 are devoted to the final conclusions and the future lines of research that may derive from the work carried out in this Thesis. The objectives of performing real-time simulations of highly complex industrial vehicles have been achieved with several of the formulations and implementations developed.

ABSTRACT: The objective of this Dissertation has been the achievement of real-time simulations of industrial vehicles modeled as complex multibody systems made up of rigid bodies. For the development of a simulation program, four main aspects must be considered: the modeling of the multibody system (types of coordinates, joints that are ideal or imposed by means of forces), the formulation to be used to set up the differential equations of motion (dependent or independent coordinates, global or topological methods, ways of imposing the constraint equations), the numerical integration method to solve these equations in time (explicit or implicit integrators) and the details of the implementation carried out (programming language, mathematical libraries, parallelization techniques). These four stages are interrelated and all of them are part of this work.
They involve the generation of models for a van and a semi-trailer truck, the use of three different dynamic formulations, the integration of the differential equations of motion through explicit and implicit methods, the use of BLAS functions and sparse matrix techniques, and the introduction of parallelization to use the different processor cores. The work presented in this Dissertation has been structured in eight chapters, the first of them being the Introduction. In Chapter 2, two different semi-recursive formulations are shown, of which the first one is based on a double velocity transformation, thus obtaining the differential equations of motion as a function of the independent relative accelerations. The numerical integration of these equations has been carried out with the fourth-order explicit Runge-Kutta method. The second formulation is based on dependent relative coordinates, imposing the constraints by means of position penalty coefficients and correcting the velocities and accelerations by projection methods. In this second case, the integration of the motion equations has been carried out by means of the HHT implicit integrator (Hilber, Hughes and Taylor), which belongs to the Newmark family of structural integrators. In Chapter 3, the third formulation used in this Dissertation is presented. In this case, the joints between the bodies of the system have been considered as flexible joints, with forces used to impose the joint conditions. This kind of joint prevents working with relative coordinates, so the positioning of the system bodies and the setting of the equations of motion have been carried out using Cartesian coordinates and Euler parameters. In this global formulation, constraints are introduced through forces, with an approach similar to that of the penalty coefficients. The stabilization of the numerical integration is also carried out by velocity and acceleration projections.
In Chapter 4, a review of the main computational tools and strategies used to increase the efficiency of the implementations of the algorithms is presented. First of all, some basic considerations to increase the numerical efficiency of the implementations are included. Then, the main characteristics of the code analyzers used are described, as well as the mathematical libraries used to solve linear algebra problems, both with dense and sparse matrices. Finally, the topic of parallelization on current multicore processors is developed thoroughly. To this end, the pattern used and the most important characteristics of the proposed tools, OpenMP and Intel TBB, are described. It should be highlighted that the characteristics of multibody systems (small problem sizes, frequent use of recursion, and intensive repetition of the calculation over time with strong dependencies on previous results) make the use of parallelization techniques extraordinarily difficult compared with other areas of computational mechanics, such as finite element computations. Building on the concepts introduced in Chapter 4, Chapter 5 is divided into three sections, one for each formulation proposed in this Dissertation. Each of these sections describes in detail how the proposed implementations have been carried out for each algorithm and which tools have been used. The first section shows the use of numerical libraries for dense and sparse matrices in the semi-recursive topological formulation based on the double velocity transformation. The second depicts the use of parallelization by means of OpenMP and TBB in the semi-recursive formulation with penalization and projections. Lastly, the use of sparse matrices and parallelization techniques is described in the global formulation with flexible joints and Euler parameters. Chapter 6 presents the results achieved with the formulations and implementations previously described.
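The dense-versus-sparse distinction above comes down to storage: sparse libraries keep only the nonzero entries, which pays off for the banded patterns typical of equations of motion. A minimal illustration of Compressed Sparse Row (CSR) storage, hand-rolled in NumPy for clarity (the Dissertation itself relies on BLAS and dedicated sparse libraries, not this toy code):

```python
import numpy as np

# Dense reference: a small tridiagonal matrix, the kind of
# sparsity pattern that appears in multibody equations of motion.
A = np.array([[ 2.0, -1.0,  0.0],
              [-1.0,  2.0, -1.0],
              [ 0.0, -1.0,  2.0]])

# CSR storage: nonzero values, their column indices, and
# row pointers delimiting each row's slice of `data`.
data = np.array([2.0, -1.0, -1.0, 2.0, -1.0, -1.0, 2.0])
indices = np.array([0, 1, 0, 1, 2, 1, 2])
indptr = np.array([0, 2, 5, 7])

def csr_matvec(data, indices, indptr, x):
    """y = A @ x using CSR storage: only nonzeros are touched."""
    y = np.zeros(len(indptr) - 1)
    for i in range(len(y)):
        start, end = indptr[i], indptr[i + 1]
        y[i] = data[start:end] @ x[indices[start:end]]
    return y

x = np.array([1.0, 2.0, 3.0])
y_sparse = csr_matvec(data, indices, indptr, x)  # agrees with A @ x
```

For the small, dense blocks of a multibody problem the BLAS-backed dense product is usually faster; the sparse route wins as the system grows and the fraction of nonzeros drops.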
This chapter starts with a description of the modeling and topology of the two vehicles studied. The first model is a two-axle chassis-cab, van-like vehicle, which belongs to the medium-load vehicle range. The second one is a five-axle tractor and semi-trailer truck, belonging to the heavy industrial vehicle category. In this chapter, a comparative study is carried out between the simulations of these vehicles with each of the formulations used, and the improvements achieved with the different strategies proposed in this Dissertation are presented quantitatively. In order to draw conclusions more easily and to evaluate the improvements introduced in the Dissertation more objectively, all the results in this chapter have been obtained on the same computer, which was the top of the Intel Xeon range in 2007 but is by now somewhat obsolete. Finally, Chapters 7 and 8 are dedicated to the final conclusions and the future lines of research that can be derived from the work presented in this Dissertation. The objectives of performing real-time simulations of highly complex industrial vehicles have been achieved with the formulations and implementations developed.
Resumo:
The effect of soiling on flat PV modules has already been studied: it causes a reduction of the electrical output of 4% on average. For CPVs, insofar as soiling produces light scattering at the optical collector surface, the scattered rays are definitively lost because they cannot be focused onto the receivers again. While a theoretical study is difficult because soiling varies from site to site, it is easier to monitor the real field performance of concentrators and then raise the following question: how much does soiling affect PV concentrators in comparison with flat panels? The answers make it possible to predict the electrical performance of PV concentrators and to establish a pattern of cleaning frequency. Some experiments have been conducted at the IES-UPM and CSES-ANU sites, consisting of linear reflective concentration systems, a point-focus refractive concentrator and a flat module. All the systems have been measured when soiled and then after cleaning, recording different increases of ISC. In general, the results show that CPV systems are more sensitive to soiling than flat panels, accumulating losses in ISC of about 14% on average in three different tests conducted at the IES-UPM and CSES-ANU test sites in Madrid (Spain) and Canberra (Australia). Some concentrators can reach losses of up to 26% when the system remains soiled over 4 months of exposure.
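The loss figures above follow directly from the short-circuit current measured before and after cleaning. A minimal sketch of that calculation; the individual ISC readings below are hypothetical, only the ~14% (CPV) and ~4% (flat module) averages come from the text:

```python
def soiling_loss(isc_soiled, isc_clean):
    """Fractional ISC loss due to soiling, from short-circuit
    current measured before and after cleaning the collector."""
    return 1.0 - isc_soiled / isc_clean

# Hypothetical readings (amperes) illustrating the reported averages:
cpv_loss = soiling_loss(isc_soiled=4.30, isc_clean=5.00)   # ~0.14
flat_loss = soiling_loss(isc_soiled=4.80, isc_clean=5.00)  # ~0.04
```

The same ratio, tracked over exposure time, is what yields a site-specific cleaning-frequency pattern.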
Resumo:
In this paper, new results on personalized PageRank are shown. We consider directed graphs that may contain dangling nodes. The main result gives an analytical characterization of all the possible values of the personalized PageRank for any node. We use this result to give a theoretical justification of a recent model that uses the personalized PageRank to classify users of Social Network Sites. We introduce new concepts concerning competitivity and leadership in complex networks, and we present some theoretical techniques to locate leaders and competitors that are valid for any personalization vector, using only information related to the adjacency matrix of the graph and the distribution of its dangling nodes.
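A standard way to compute personalized PageRank on a directed graph with dangling nodes is power iteration, with the mass of each dangling node redistributed according to the personalization vector. A minimal sketch of that textbook scheme (the three-node graph and vector below are illustrative, not taken from the paper):

```python
import numpy as np

def personalized_pagerank(A, v, alpha=0.85, tol=1e-12, max_iter=1000):
    """Personalized PageRank by power iteration.
    A: adjacency matrix with A[i, j] = 1 for an edge i -> j.
    v: personalization vector (nonnegative, sums to 1).
    Dangling nodes (rows with no out-links) spread their mass
    according to v."""
    n = A.shape[0]
    out_deg = A.sum(axis=1)
    x = v.copy()
    for _ in range(max_iter):
        flow = np.zeros(n)
        for i in range(n):
            if out_deg[i] > 0:
                flow += x[i] * A[i] / out_deg[i]   # mass along out-links
            else:
                flow += x[i] * v                   # dangling node -> v
        x_new = alpha * flow + (1 - alpha) * v
        if np.abs(x_new - x).sum() < tol:
            return x_new
        x = x_new
    return x

# Path 0 -> 1 -> 2 with node 2 dangling; personalize fully on node 0.
A = np.array([[0, 1, 0], [0, 0, 1], [0, 0, 0]], dtype=float)
v = np.array([1.0, 0.0, 0.0])
pr = personalized_pagerank(A, v)
```

The output is a probability vector (it sums to 1), and the ranking it induces is exactly the kind of node-level quantity the paper characterizes analytically for arbitrary personalization vectors.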