946 results for Fundamentals in linear algebra
Abstract:
The high performance computing community has traditionally focused solely on the reduction of execution time, though in recent years the optimization of energy consumption has become a main issue. A reduction of energy usage without a degradation of performance requires the adoption of energy-efficient hardware platforms accompanied by the development of energy-aware algorithms and computational kernels. The solution of linear systems is a key operation for many scientific and engineering problems. Its relevance has motivated a significant amount of work, and consequently, high performance solvers are available for a wide variety of hardware platforms. In this work, we aim to develop a high performance and energy-efficient linear system solver. In particular, we develop two solvers for a low-power CPU-GPU platform, the NVIDIA Jetson TK1. These solvers implement the Gauss-Huard algorithm, yielding an efficient usage of the target hardware as well as efficient memory access. The experimental evaluation shows that the novel proposal delivers important savings in both time and energy consumption when compared with the state-of-the-art solvers for the platform.
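The Gauss-Huard algorithm mentioned above is a variant of Gauss-Jordan elimination that produces the solution with no separate backward-substitution sweep, which is attractive for heterogeneous hardware. A minimal NumPy sketch of the idea, without the pivoting, blocking, or CPU-GPU work partitioning a production solver like the one in the abstract would need:

```python
import numpy as np

def gauss_huard_solve(A, b):
    """Solve A x = b with the Gauss-Huard variant of Gauss-Jordan
    elimination (no pivoting; illustrative only, not the paper's code)."""
    n = A.shape[0]
    M = np.hstack([A.astype(float), b.reshape(-1, 1)])  # augmented matrix
    for k in range(n):
        # Update row k using the rows already processed above it
        # (those rows hold the identity pattern in columns 0..k-1).
        M[k, k:] -= M[k, :k] @ M[:k, k:]
        # Normalize the pivot row.
        M[k, k:] /= M[k, k]
        # Annihilate column k above the diagonal only; rows below stay
        # untouched, which is what removes backward substitution.
        M[:k, k + 1:] -= np.outer(M[:k, k], M[k, k + 1:])
        M[:k, k] = 0.0
    return M[:, n]  # last column of the augmented matrix is x
```

After the final step the left block is the identity, so the augmented column holds the solution directly.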
Abstract:
Reasoning systems have reached a high degree of maturity in the last decade. However, even the most successful systems are usually not general-purpose problem solvers but are typically specialised in problems of a certain domain. The MathWeb Software Bus (MathWeb-SB) is a system for combining reasoning specialists via a common software bus. We describe the integration of the lambda-clam system, a reasoning specialist for proofs by induction, into the MathWeb-SB. Due to this integration, lambda-clam now offers its theorem proving expertise to other systems in the MathWeb-SB. On the other hand, lambda-clam can use the services of any reasoning specialist already integrated. We focus on the latter and describe first experiments on proving theorems by induction using the computational power of the MAPLE system within lambda-clam.
Abstract:
Classical regression analysis can be used to model time series. However, the assumption that model parameters are constant over time is not necessarily appropriate for the data. In phytoplankton ecology, the relevance of time-varying parameter values has been shown using a dynamic linear regression model (DLRM). DLRMs, belonging to the class of Bayesian dynamic models, assume the existence of a non-observable time series of model parameters, which are estimated on-line, i.e. after each observation. The aim of this paper was to show how DLRM results could be used to explain variation of a time series of phytoplankton abundance. We applied DLRM to daily concentrations of Dinophysis cf. acuminata, determined in Antifer harbour (French coast of the English Channel), along with physical and chemical covariates (e.g. wind velocity, nutrient concentrations). A single model was built using 1989 and 1990 data, and then applied separately to each year. Equivalent static regression models were investigated for the purpose of comparison. Results showed that most of the Dinophysis cf. acuminata concentration variability was explained by the configuration of the sampling site, the wind regime and tidal residual flow. Moreover, the relationships of these factors with the concentration of the microalga varied with time, a fact that could not be detected with static regression. Application of dynamic models to phytoplankton time series, especially in a monitoring context, is discussed.
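A DLRM of the kind described treats the regression coefficients as a latent random walk and updates them on-line after each observation, which is exactly a Kalman filter on the coefficient vector. A minimal sketch under that standard state-space formulation (all variances and data here are illustrative, not the paper's model):

```python
import numpy as np

def dlrm_filter(y, X, state_var=0.01, obs_var=1.0):
    """On-line estimation of time-varying regression coefficients:
    beta_t = beta_{t-1} + w_t,   y_t = x_t' beta_t + v_t.
    Plain Kalman filter; one coefficient update per observation."""
    n, p = X.shape
    beta = np.zeros(p)           # posterior mean of the coefficients
    P = np.eye(p) * 1e3          # posterior covariance (vague prior)
    W = np.eye(p) * state_var    # random-walk evolution covariance
    betas = np.empty((n, p))
    for t in range(n):
        P = P + W                            # predict step
        x = X[t]
        S = x @ P @ x + obs_var              # innovation variance
        K = P @ x / S                        # Kalman gain
        beta = beta + K * (y[t] - x @ beta)  # update after observation t
        P = P - np.outer(K, x @ P)
        betas[t] = beta
    return betas
```

Setting `state_var = 0` recovers an ordinary (static) regression fit in the limit, which is the comparison the abstract draws.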
Abstract:
The aim of this paper is to provide a comprehensive study of some linear non-local diffusion problems in metric measure spaces. These include, for example, open subsets in ℝ^N, graphs, manifolds, multi-structures and some fractal sets. For this, we study regularity, compactness, positivity and the spectrum of the stationary non-local operator. We then study the solutions of linear evolution non-local diffusion problems, with emphasis on similarities and differences with the standard heat equation in smooth domains. In particular, we prove weak and strong maximum principles and describe the asymptotic behaviour using spectral methods.
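On a discretized domain, the stationary non-local operator studied above takes the form Lu(x) = ∫ J(x−y)(u(y)−u(x)) dy, and its spectrum (nonpositive, with 0 attained by constants) drives the asymptotic behaviour. A toy discretization on an interval, with an invented compactly supported kernel, makes that spectral structure visible:

```python
import numpy as np

def nonlocal_operator(n=80, length=2.0):
    """Discretize Lu(x) = integral of J(x-y) (u(y) - u(x)) dy on [0, L]
    with a compactly supported tent kernel (toy model, kernel invented)."""
    x = np.linspace(0.0, length, n)
    h = x[1] - x[0]
    diff = np.abs(x[:, None] - x[None, :])
    J = np.where(diff < 0.5, 1.0 - 2.0 * diff, 0.0)  # tent kernel, support 0.5
    L = J * h                        # off-diagonal part: J(x-y) dy
    L -= np.diag(L.sum(axis=1))      # diagonal part: -(integral of J) u(x)
    return L
```

The matrix is symmetric with zero row sums and nonnegative off-diagonal entries, so it is negative semidefinite, mirroring the spectral properties the paper uses for the long-time analysis.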
Abstract:
Part 15: Performance Management Frameworks
Abstract:
Purpose: To study the in vivo metabolism of kurarinone, a lavandulyl flavanone that is a major constituent of Kushen and a marker compound with many biological activities, using ultra-performance liquid chromatography coupled with linear ion trap Orbitrap mass spectrometry (UPLC-LTQ-Orbitrap-MS). Methods: Six male Sprague-Dawley rats were randomly divided into two groups. First, kurarinone was suspended in 0.5 % carboxymethylcellulose sodium (CMC-Na) aqueous solution and given orally to rats (n = 3, 2 mL for each rat) at 50 mg/kg. A 2 mL aliquot of 0.5 % CMC-Na aqueous solution was administered to the rats in the control group. Next, urine samples were collected over 0-24 h after the oral administrations, and all urine samples were pretreated by a solid phase extraction (SPE) method. Finally, all samples were analyzed by UPLC-LTQ-Orbitrap mass spectrometry coupled with an electrospray ionization (ESI) source operated in the negative ionization mode. Results: A total of 11 metabolites, including the parent drug and 10 phase II metabolites in rat urine, were detected and interpreted for the first time based on accurate mass measurement, fragment ions, and chromatographic retention times. The results indicated that glucuronides of kurarinone were the dominant metabolites excreted in rat urine. Conclusion: The results of this work indicate that kurarinone is typically transformed in vivo into nontoxic glucuronidated metabolites, and these findings may help to characterize the metabolic profile of kurarinone.
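Metabolite assignment by accurate mass, as used above, typically accepts a candidate elemental formula only when the mass error falls within a few parts per million. A minimal sketch of that check (the tolerance and the m/z values in the usage are illustrative, not the study's data):

```python
def ppm_error(measured_mz, theoretical_mz):
    """Mass accuracy in parts per million, used when matching measured
    accurate masses to candidate metabolite formulas."""
    return (measured_mz - theoretical_mz) / theoretical_mz * 1e6

def matches(measured_mz, theoretical_mz, tol_ppm=5.0):
    """Accept an assignment only within a ppm tolerance (illustrative)."""
    return abs(ppm_error(measured_mz, theoretical_mz)) <= tol_ppm
```

For example, a measurement of 300.1003 against a theoretical 300.1000 is an error of about 1 ppm and would be accepted at a 5 ppm tolerance.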
Abstract:
During the past decade, there has been a dramatic increase in postsecondary institutions providing academic programs and course offerings in a multitude of formats and venues (Biemiller, 2009; Kucsera & Zimmaro, 2010; Lang, 2009; Mangan, 2008). Strategies pertaining to reapportionment of course-delivery seat time have been a major facet of these institutional initiatives, most notably within many open-door 2-year colleges. Often, these enrollment-management decisions are driven by the desire to increase market share, optimize the usage of finite facility capacity, and contain costs, especially during these economically turbulent times. So, while enrollments have surged to the point where nearly one in three 18-to-24-year-old U.S. undergraduates are community college students (Pew Research Center, 2009), graduation rates, on average, still remain distressingly low (Complete College America, 2011). Among the learning-theory constructs related to seat-time reapportionment efforts is the cognitive phenomenon commonly referred to as the spacing effect, the degree to which learning is enhanced by a series of shorter, separated sessions as opposed to fewer, more massed episodes. This ex post facto study explored whether seat time in a postsecondary developmental-level algebra course is significantly related to: course success; course-enrollment persistence; and, longitudinally, the time to successfully complete a general-education-level mathematics course. Hierarchical logistic regression and discrete-time survival analysis were used to perform a multi-level, multivariable analysis of a student cohort (N = 3,284) enrolled at a large, multi-campus, urban community college. The subjects were retrospectively tracked over a 2-year longitudinal period. The study found that students in long seat-time classes tended to withdraw earlier and more often than did their peers in short seat-time classes (p < .05).
Additionally, a model comprising nine statistically significant covariates (all with p-values less than .01) was constructed. However, no longitudinal seat-time group differences were detected, nor was there sufficient statistical evidence to conclude that seat time was predictive of developmental-level course success. A principal aim of this study was to demonstrate, to educational leaders, researchers, and institutional-research/business-intelligence professionals, the advantages and computational practicability of survival analysis, an underused but more powerful way to investigate changes in students over time.
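Discrete-time survival analysis of the kind advocated above is usually fit by expanding each subject into one row per observed period (a "person-period" data set) and then running a logistic regression on the period-level event indicator. A sketch of the expansion step, with hypothetical field names rather than the study's variables:

```python
def person_period(records):
    """Expand (id, periods_observed, event_occurred) records into
    person-period rows for discrete-time survival analysis.
    Field names are illustrative, not the study's."""
    rows = []
    for sid, periods, event in records:
        for t in range(1, periods + 1):
            # The event indicator is 1 only in the final observed period,
            # and only if the subject experienced the event (not censored).
            rows.append({"id": sid, "period": t,
                         "event": int(event and t == periods)})
    return rows
```

A student who withdrew in term 3 contributes three rows (events 0, 0, 1), while a student censored after term 2 contributes two rows of zeros; the logistic model then predicts the period-level hazard.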
Abstract:
We treat the problem of existence of a location-then-price equilibrium in the circle model with a linear-quadratic transportation cost function which can be either convex or concave. We show the existence of a unique perfect equilibrium for the concave case when the linear and quadratic terms are equal, and of a unique perfect equilibrium for the convex case when the linear term is equal to zero. Aside from these two cases, there are feasible locations for the firms for which no equilibrium in the price subgame exists. Finally, we provide a full taxonomy of the price equilibrium regions in terms of the weights of the linear and quadratic terms in the cost function.
Abstract:
Hydronium ions (H3O+) are formed at short times in the spurs or along the tracks produced by the radiolysis of water by ionizing radiation of either low or high linear energy transfer (LET). This in situ formation of H3O+ renders the spur/track regions of the radiation temporarily more acidic than the surrounding medium. Although experimental evidence of spur acidity has previously been reported, only fragmentary information is available on its magnitude and time dependence. In this work, we determine the H3O+ concentrations and the corresponding pH values as a function of time from H3O+ yields calculated with Monte Carlo simulations of the chemistry occurring in the tracks. Four incident ions of different LET were selected, and two spur/track models were used: 1) an isolated "spherical" spur model (low LET) and 2) a "cylindrical" track model (high LET). In all cases studied, a transient, abrupt acidic pH effect, which we call an "acid spike", is observed immediately after irradiation. This effect does not appear to have been explored in water or in a cellular environment subjected to ionizing radiation, particularly at high LET. In this regard, this work raises questions about the possible implications of this effect in radiobiology, some of which are briefly discussed. Our calculations were then extended to study the influence of temperature, from 25 to 350 °C, on the in situ formation of H3O+ ions and on the acid-spike effect occurring at short times in the radiolysis of water at low LET. The results show a marked increase in the acid-spike response at high temperatures.
Since many processes occurring in the core of a water-cooled nuclear reactor depend critically on pH, the question here is whether these strong variations in acidity, even though highly localized and transient, contribute to the corrosion and damage of materials.
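The pH values reported as a function of time follow directly from the computed H3O+ concentrations via pH = −log10[H3O+]. A trivial sketch of that conversion (the concentrations in the usage are illustrative, not the simulated yields):

```python
import math

def ph_from_concentration(h3o_molar):
    """pH from the hydronium-ion concentration in mol/L."""
    return -math.log10(h3o_molar)
```

For example, a transient local concentration of 2.5e-5 mol/L corresponds to a pH of about 4.6, markedly more acidic than neutral water (1e-7 mol/L, pH 7), which is the sense in which the spur is an "acid spike".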
Abstract:
This thesis project focuses on optimizing the tuning procedure for the linear regulators of the position and velocity control loops found in the drives used industrially in automatic machines, especially when the load inertia varies with position, and is therefore nonlinear, as in a four-bar linkage. The work was carried out in collaboration with G.D S.p.A., and the test mechanism is actually used in automatic machines for cigarette packaging. The optimization is based on simulation of the entire control system in the Matlab/Simulink environment, comprising the Simulink model of the drive's control loops, including the electrical dynamics of the motor, and the Simscape model of the mechanism; a first necessary phase of the work was therefore the validation of these models to ensure they were sufficiently faithful to the real behaviour. The second step was to provide an initial trial tuning to serve as a starting point for the optimization algorithm; we did this by linearizing the mechanical model at minimum inertia and using the inversion-formula method to determine the control parameters. Even this tuning, although conservative, led to an improvement in system performance over the empirical tuning commonly carried out in industry. Finally, we ran the optimization algorithm with a suitably defined cost function, and the result was decidedly positive, yielding an average improvement in the maximum tracking error of about 25%, and more than 30% in some cases.
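As a toy analogue of the trial-tuning step, one can linearize the load at its minimum inertia and choose PI gains for the velocity loop by placing the crossover frequency, then check the closed loop in simulation. The sketch below is a generic loop-shaping illustration under invented numbers: the pure-inertia plant, the crossover target, and the gain formulas are all assumptions, not the thesis model or its inversion formulas:

```python
import numpy as np

def tune_by_crossover(J, wc=200.0, alpha=5.0):
    """Trial tuning for a PI velocity regulator on a pure-inertia plant:
    kp sets the crossover wc, ki is placed a factor alpha below it
    (textbook loop shaping; constants illustrative)."""
    kp = J * wc
    ki = kp * wc / alpha
    return kp, ki

def simulate_velocity_loop(J, kp, ki, t_end=1.0, dt=1e-4):
    """Euler simulation of the PI velocity loop J * domega/dt = u for a
    unit step reference (friction and motor electrical dynamics omitted)."""
    n = int(t_end / dt)
    omega, integ = 0.0, 0.0
    err = np.empty(n)
    for i in range(n):
        e = 1.0 - omega           # tracking error on the unit step
        integ += e * dt
        u = kp * e + ki * integ   # PI control law
        omega += u / J * dt
        err[i] = e
    return err
```

Because the tuning is done at the minimum inertia, it is conservative for the position-dependent load, which matches the abstract's remark that the trial tuning already improves on purely empirical gains while leaving margin for the optimizer.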
Diffusive models and chaos indicators for non-linear betatron motion in circular hadron accelerators
Abstract:
Understanding the complex dynamics of beam-halo formation and evolution in circular particle accelerators is crucial for the design of current and future rings, particularly those utilizing superconducting magnets such as the CERN Large Hadron Collider (LHC), its luminosity upgrade HL-LHC, and the proposed Future Circular Hadron Collider (FCC-hh). A recent diffusive framework, which describes the evolution of the beam distribution by means of a Fokker-Planck equation with a diffusion coefficient derived from the Nekhoroshev theorem, has been proposed to describe the long-term behaviour of beam dynamics and particle losses. In this thesis, we discuss the theoretical foundations of this framework and propose the implementation of an original measurement protocol based on collimator scans, with the aim of measuring the Nekhoroshev-like diffusion coefficient from beam loss data. The available LHC collimator scan data, unfortunately collected without the proposed measurement protocol, have been successfully analysed using the proposed framework. This approach is also applied to datasets from detailed measurements of the impact of so-called long-range beam-beam compensators on beam losses, also at the LHC. Furthermore, dynamic indicators have been studied as a tool for exploring the phase-space properties of realistic accelerator lattices in single-particle tracking simulations. By first examining the classification performance of known and new indicators in detecting the chaotic character of initial conditions for a modulated Hénon map, and then applying this knowledge to study the properties of realistic accelerator lattices, we tried to identify a connection between the presence of chaotic regions in the phase space and Nekhoroshev-like diffusive behaviour, providing new tools for the accelerator physics community.
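The diffusive framework described above evolves the beam's action distribution through a Fokker-Planck equation whose Nekhoroshev-inspired coefficient is often written D(I) ∝ exp(−2(I*/I)^(1/(2κ))). A toy explicit finite-difference integration of that equation, with invented parameters and boundary handling, shows the basic machinery (this is an illustration, not the thesis solver):

```python
import numpy as np

def nekhoroshev_D(I, I_star=1.0, kappa=0.33):
    """Nekhoroshev-like diffusion coefficient (constants illustrative)."""
    return np.exp(-2.0 * (I_star / I) ** (1.0 / (2.0 * kappa)))

def evolve(rho, I, dt, steps):
    """Explicit conservative scheme for d(rho)/dt = d/dI (D(I) d(rho)/dI)
    with zero-flux boundaries (toy integrator)."""
    h = I[1] - I[0]
    D_face = nekhoroshev_D(0.5 * (I[:-1] + I[1:]))   # D at cell interfaces
    for _ in range(steps):
        flux = D_face * np.diff(rho) / h             # flux at interior faces
        flux = np.concatenate([[0.0], flux, [0.0]])  # no flux through ends
        rho = rho + dt / h * np.diff(flux)           # conservative update
    return rho
```

Because the update is a telescoping difference of fluxes with zero-flux ends, the total probability is conserved while the distribution spreads, which is the mechanism connecting the model to measured beam-loss rates.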
Abstract:
This dissertation adopts a multidisciplinary approach to investigate graphical and formal features of Cretan Hieroglyphic and Linear A. Drawing on theories which understand inscribed artefacts as an interplay of materials, iconography, and texts, I combine archaeological and philological considerations with statistical and experimental observations. The work is organized around three key questions. The first deals with the origins of Cretan Hieroglyphic. After providing a fresh view of Prepalatial seal chronology, I identify a number of forerunners of Hieroglyphic signs in iconographic motifs attested in Prepalatial glyptic and material culture. I further identify a specific style-group, i.e. the ‘Border and Leaf Complex’, as the decisive step towards the emergence of the Hieroglyphic graphic repertoire. The second deals with the interweaving of formal, iconographical, and epigraphic features of Hieroglyphic seals with the sequences they bear and the contexts of their usage. By means of two Correspondence Analyses, I show that the iconography on seals of some materials and shapes is closer to Cretan Hieroglyphic than that on others. Through two Social Network Analyses, I show that Hieroglyphic impressions, especially at Knossos, follow a precise sealing pattern determined by their shapes and sequences. Furthermore, prisms with a high number of inscribed faces adhere to the formal features of jasper ones. Finally, through experimental engravings, I show differences in cutting rates among materials, as well as the efficiency of abrasives and tools unearthed within the Quartier Mu. The third question concerns overlaps in chronology, findspots and signaries between Cretan Hieroglyphic and Linear A. I discuss all possible earliest instances of both scripts and argue for some items datable to the MM I-IIA period. I further provide an insight into the Hieroglyphic-Linear A dubitanda and criteria for their interpretation.
Finally, I suggest four different patterns in the creation and diversification of the two signaries.
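Correspondence Analysis of the sort applied to the seal data reduces a contingency table (for instance, seal material/shape by iconographic motif) via a singular value decomposition of the standardized residuals. A compact sketch of the classical computation (the table in the usage is invented for illustration, not the dissertation's data):

```python
import numpy as np

def correspondence_analysis(table):
    """Row and column principal coordinates from a contingency table,
    via SVD of the matrix of standardized residuals (classical CA)."""
    P = table / table.sum()                    # correspondence matrix
    r = P.sum(axis=1)                          # row masses
    c = P.sum(axis=0)                          # column masses
    S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))
    U, sv, Vt = np.linalg.svd(S, full_matrices=False)
    rows = U * sv / np.sqrt(r)[:, None]        # principal row coordinates
    cols = Vt.T * sv / np.sqrt(c)[:, None]     # principal column coordinates
    return rows, cols, sv ** 2                 # inertia carried by each axis
```

Categories plotted close together in the leading coordinates co-occur more than expected under independence, which is how "closeness" between seal types and iconographic repertoires is read off the analysis; the total inertia equals the table's chi-square statistic divided by the grand total.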
Abstract:
Imaging technologies are widely used in application fields such as the natural sciences, engineering, medicine, and the life sciences. A broad class of imaging problems reduces to solving ill-posed inverse problems (IPs). Traditional strategies for solving these ill-posed IPs rely on variational regularization methods, which are based on the minimization of suitable energies and make use of knowledge about the image formation model (forward operator) and prior knowledge of the solution, but fall short in incorporating knowledge directly from data. On the other hand, the more recent learned approaches can easily learn the intricate statistics of images given a large set of data, but lack a systematic method for incorporating prior knowledge about the image formation model. The main purpose of this thesis is to discuss data-driven image reconstruction methods which combine the benefits of these two different reconstruction strategies for the solution of highly nonlinear ill-posed inverse problems. Mathematical formulations and numerical approaches for imaging IPs, including linear as well as strongly nonlinear problems, are described. More specifically, we address the Electrical Impedance Tomography (EIT) reconstruction problem by unrolling the regularized Gauss-Newton method and integrating the regularization learned by a data-adaptive neural network. Furthermore, we investigate the solution of nonlinear ill-posed IPs by introducing a deep-PnP framework that integrates a graph convolutional denoiser into the proximal Gauss-Newton method, with a practical application to EIT, a promising, recently introduced imaging technique. Efficient algorithms are then applied to the solution of the limited-electrode problem in EIT, combining compressive sensing techniques and deep learning strategies. Finally, a transformer-based neural network architecture is adapted to restore the noisy solution of the Computed Tomography problem recovered using the filtered back-projection method.
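The unrolled scheme mentioned above builds on the classical regularized Gauss-Newton iteration for min ||F(x) − y||² + λ||x||², whose linearized step the learned variants replace or augment. A toy version with an identity regularizer and a fixed λ, applied here to an invented exponential-fitting problem rather than the EIT forward operator:

```python
import numpy as np

def gauss_newton_tikhonov(F, jac, y, x0, lam=1e-3, iters=20):
    """Regularized Gauss-Newton for min ||F(x) - y||^2 + lam * ||x||^2:
    each step solves the Tikhonov-regularized normal equations of the
    linearized problem (toy version: identity regularizer, no line search)."""
    x = x0.astype(float)
    for _ in range(iters):
        J = jac(x)                                  # Jacobian of F at x
        r = y - F(x)                                # current residual
        A = J.T @ J + lam * np.eye(x.size)
        x = x + np.linalg.solve(A, J.T @ r - lam * x)
    return x
```

In the learned setting of the thesis, the fixed penalty λ||x||² is what gets replaced by a data-adaptive network across a fixed number of unrolled iterations; the skeleton of the update is the same.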
Abstract:
In recent years, vehicle acoustics have gained significant importance in new car development: increasingly advanced infotainment systems for spatial audio and sound enhancement algorithms have become the norm in modern vehicles. In the past, car manufacturers had to build numerous prototypes to study the sound behaviour inside the car cabin or the effect of new algorithms under development. Nowadays, advanced simulation techniques can reduce development costs and time. In this work, after selecting the reference test vehicle, a modern luxury sedan equipped with a high-end sound system, two independent tools were developed: a simulation tool created in the Comsol Multiphysics environment and an auralization tool developed in the Cycling '74 MAX environment. The simulation tool can calculate the impulse response and acoustic spectrum at a specific position inside the cockpit. Its input data are the vehicle's geometry, acoustic absorption parameters of materials, the acoustic characteristics and position of loudspeakers, and the type and position of virtual microphones (or microphone arrays). The simulation tool can also provide binaural impulse responses thanks to Head Related Transfer Functions (HRTFs) and an innovative algorithm able to compute the HRTF at any distance and angle from the head. Impulse responses from simulations or acoustic measurements inside the car cabin are processed and fed into the auralization tool, enabling real-time interaction by applying filters, changing channel gains or displaying the acoustic spectrum. Since the acoustic simulation of a vehicle involves multiple topics, the focus of this work has not only been the development of two tools but also the study and application of new techniques for acoustic characterization of the materials that compose the cockpit and the loudspeaker simulation.
Specifically, three different methods have been applied for material characterization through the use of a pressure-velocity probe, a Laser Doppler Vibrometer (LDV), and a microphone array.
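Auralization of the kind performed by the MAX tool ultimately reduces to convolving a dry source signal with a measured or simulated (possibly binaural) impulse response. A minimal offline sketch using FFT convolution; real-time engines use partitioned variants of the same idea, and nothing here reflects the thesis tool's actual implementation:

```python
import numpy as np

def auralize(dry, impulse_response):
    """Offline auralization: linear convolution of a dry signal with an
    impulse response, computed via zero-padded real FFTs."""
    n = len(dry) + len(impulse_response) - 1     # full convolution length
    nfft = 1 << (n - 1).bit_length()             # next power of two >= n
    spec = np.fft.rfft(dry, nfft) * np.fft.rfft(impulse_response, nfft)
    return np.fft.irfft(spec, nfft)[:n]          # trim the zero padding
```

Zero-padding to at least `len(dry) + len(ir) - 1` samples turns the FFT's circular convolution into the linear convolution a listener would hear; for binaural playback the same operation is applied once per ear with the corresponding impulse response.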