10 results for Numeric Modelation
in Consorci de Serveis Universitaris de Catalunya (CSUC), Spain
Abstract:
This study uses event-related brain potentials (ERPs) to investigate the electrophysiological correlates of numeric conflict monitoring in math-anxious individuals, by analyzing whether math anxiety is related to abnormal processing in early conflict detection (as shown by the N450 component) and/or in a later, response-related stage of processing (as shown by the conflict sustained potential; Conflict-SP). Conflict adaptation effects were also studied by analyzing the effect of the previous trial's congruence on current interference. To this end, 17 low math-anxious (LMA) and 17 high math-anxious (HMA) individuals were presented with a numerical Stroop task. Groups were extreme in math anxiety but did not differ in trait or state anxiety or in simple math ability. The interference effect of the current trial (incongruent minus congruent) and the interference effect preceded by congruence and by incongruence were analyzed both for behavioral measures and for ERPs. A greater interference effect was found for response times in the HMA group than in the LMA one. Regarding ERPs, the LMA group showed a greater N450 component for the interference effect preceded by congruence than when preceded by incongruence, while the HMA group showed greater Conflict-SP amplitude for the interference effect preceded by congruence than when preceded by incongruence. Our study showed that the electrophysiological correlates of numeric interference in HMA individuals comprise the absence of a conflict adaptation effect in the first stage of conflict processing (N450) and an abnormal subsequent up-regulation of cognitive control in order to overcome the conflict (Conflict-SP). More concretely, our study shows that math anxiety is related to a reactive and compensatory recruitment of control resources that is implemented only after previous exposure to a stimulus presenting conflicting information.
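As a minimal sketch of the behavioral analysis the abstract describes (the column names and values below are hypothetical, not the authors' actual pipeline), the interference effect and its split by previous-trial congruence can be computed from trial-level response times like this:

```python
import pandas as pd

# Hypothetical trial-level data: one row per trial with response time (ms),
# current-trial congruence, and the preceding trial's congruence.
trials = pd.DataFrame({
    "rt":             [612, 655, 598, 641, 620, 689, 605, 660],
    "congruent":      [True, False, True, False, True, False, True, False],
    "prev_congruent": [True, True, False, False, True, True, False, False],
})

# Interference effect: mean RT on incongruent minus congruent trials.
mean_rt = trials.groupby("congruent")["rt"].mean()
interference = mean_rt[False] - mean_rt[True]

# Conflict adaptation: interference after congruent vs. after incongruent trials.
by_prev = trials.groupby(["prev_congruent", "congruent"])["rt"].mean()
interference_after_congruent = by_prev[(True, False)] - by_prev[(True, True)]
interference_after_incongruent = by_prev[(False, False)] - by_prev[(False, True)]

print(interference, interference_after_congruent, interference_after_incongruent)
```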
Abstract:
The paper documents MINTOOLKIT for GNU Octave. MINTOOLKIT provides functions for minimization and numeric differentiation. The main algorithms are BFGS, LBFGS, and simulated annealing. Examples are given.
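The abstract does not reproduce MINTOOLKIT's own function names, so as a language-neutral illustration of the numeric differentiation such a toolkit provides, here is a minimal central-difference gradient in Python (all names here are illustrative, not part of MINTOOLKIT):

```python
import numpy as np

def num_gradient(f, x, h=1e-6):
    """Central-difference gradient: (f(x + h*e_i) - f(x - h*e_i)) / (2h)."""
    x = np.asarray(x, dtype=float)
    grad = np.empty_like(x)
    for i in range(x.size):
        step = np.zeros_like(x)
        step[i] = h
        grad[i] = (f(x + step) - f(x - step)) / (2.0 * h)
    return grad

# Example: the gradient of the Rosenbrock function at (0, 0) is (-2, 0).
def rosenbrock(x):
    return (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2

print(num_gradient(rosenbrock, [0.0, 0.0]))
```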
Abstract:
This paper presents an initial challenge to tackling the ever so "tricky" points encountered when dealing with energy accounting, and then illustrates how such a system of accounting can be used when assessing the metabolic changes of societies. The paper is divided into four main sections. The first three present a general discussion of the main issues encountered when conducting energy analyses. The last section then combines this heuristic approach with its actual formalization, in quantitative terms, for the analysis of possible energy scenarios. Section one covers the broader issue of how to account for the relevant categories used when accounting for Joules of energy, emphasizing the clear distinction between Primary Energy Sources (PES), the physical exploited entities that are used to derive usable energy forms (energy carriers), and Energy Carriers (EC), the actual useful energy that is transmitted for the appropriate end uses within a society. Section two sheds light on the concept of Energy Return on Investment (EROI). Here, it is emphasized that a certain amount of energy carriers must already be available in order to extract/exploit Primary Energy Sources and thereafter generate a net supply of energy carriers. It is pointed out that the current trend of intense energy supply has only been possible because of the heavy use of, and dependence on, fossil energy. Section three follows up on the discussion of EROI, indicating that a single numeric indicator such as an output/input ratio is not sufficient for assessing the performance of energetic systems. Rather, an integrated approach is advocated, one that incorporates (i) how big the net supply of Joules of EC can be, given an amount of extracted PES (the external constraints); (ii) how much EC needs to be invested to extract an amount of PES; and (iii) the power level that it takes for both processes to succeed. Section four ultimately puts the theoretical concepts into play, assessing how the metabolic performance of societies can be accounted for within this analytical framework.
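Since EROI is introduced above as an output/input ratio of energy carriers, a toy calculation (the figures are made up for illustration, not data from the paper) makes the net-supply logic concrete:

```python
# Illustrative EROI arithmetic (numbers are invented for the example).
ec_delivered = 500.0   # energy carriers delivered to society (PJ)
ec_invested = 50.0     # energy carriers spent extracting/processing PES (PJ)

eroi = ec_delivered / ec_invested          # output/input ratio: 10.0
net_supply = ec_delivered - ec_invested    # net energy carriers: 450 PJ

# The abstract's point: the bare ratio (10:1) hides both the absolute net
# supply and the power level (PJ per year) at which both processes must run.
print(f"EROI = {eroi:.1f}, net supply = {net_supply:.0f} PJ")
```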
Abstract:
The study aims to evaluate the predictive validity for violence of structured clinical judgment instruments, with a methodology that yields results comparable to international studies. The research was carried out in a civil mental health hospital, and the sample comprised 114 patients from the chronic and subacute units. At the initial assessment, the HCR-20, the PCL:SV and Protocol 7 were the instruments used to collect information on the predictor variables. The dependent (outcome) variable was recorded prospectively by the nursing team with an easy-to-use observational instrument, the MOAS. By means of correlation indices, relative risk calculations, logistic regression analyses and ROC curves, it was possible to establish that the HCR-20 and the PCL:SV are valid measures for predicting inpatient violence in the short and medium term in a Spanish sample of people with severe mental illness. The HCR-20, and particularly its clinical items, were the best predictors of physical violence against people and objects. Both the numeric HCR-20 score and the structured clinical judgment showed high predictive accuracy, comparable to that obtained with the original version of the instrument. The PCL:SV reached moderate predictive accuracy, which decreased over the course of the follow-up. Other risk factors, such as aggression or anger prior to the assessment, also significantly increased the risk of violence during the one-year follow-up.
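The predictive-accuracy analysis described above (logistic regression plus ROC curves) follows a standard pattern; a minimal sketch with scikit-learn, using simulated stand-ins rather than the study's data, looks like this:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# Hypothetical stand-ins for the study's variables: X = baseline risk-item
# scores (e.g., HCR-20 ratings), y = violent incident during follow-up.
rng = np.random.default_rng(0)
X = rng.normal(size=(114, 20))              # 114 patients, 20 predictor items
y = (X[:, :5].sum(axis=1) + rng.normal(size=114) > 0).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)
risk = model.predict_proba(X)[:, 1]

# Area under the ROC curve quantifies predictive accuracy (0.5 = chance).
# The study scored prospectively recorded outcomes; this in-sample AUC on
# simulated data is illustrative only.
print(f"AUC = {roc_auc_score(y, risk):.2f}")
```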
Abstract:
To determine the complete spatio-temporal dynamics of a three-dimensional quantum system of N particles, the Schrödinger equation must be integrated in 3N dimensions. The capacity of current computers allows this to be done in at most 3 dimensions. To reduce the computation time needed to integrate the multidimensional Schrödinger equation, a series of approximations is usually made, such as the Born–Oppenheimer or mean-field approximations. In general, the price paid for these approximations is the loss of quantum correlations (or entanglement). It is therefore necessary to develop numerical methods that make it possible to integrate and study the dynamics of mesoscopic systems (systems of between three and about ten particles) while taking into account, even if only approximately, the quantum correlations between particles. Recently, in the context of electron transport by tunneling in semiconductor materials, X. Oriols has developed a new method [Phys. Rev. Lett. 98, 066803 (2007)] for treating quantum correlations in mesoscopic systems. This proposal is based on the de Broglie–Bohm formulation of quantum mechanics. We stress that the approach taken by X. Oriols, which we intend to follow here, is adopted not as an interpretive tool but to obtain a numerical-calculation tool with which to integrate more efficiently the Schrödinger equation for quantum systems of few particles. Within this doctoral thesis project, the aim is to extend the algorithms developed by X. Oriols to quantum systems composed of both fermions and bosons, and to apply these algorithms to different mesoscopic quantum systems in which quantum correlations play an important role. Specifically, the problems to be studied are the following: (i) photoionization of the helium and lithium atoms by an intense laser; (ii) study of the relation between X. Oriols's formulation and the Born–Oppenheimer approximation; (iii) study of quantum correlations in bi- and tripartite systems in the particles' configuration space using the de Broglie–Bohm formulation.
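For reference, the two textbook equations underlying the approach described above are the many-body time-dependent Schrödinger equation and the de Broglie–Bohm guidance equation for each particle's trajectory (standard formulas, not reproduced from the abstract):

```latex
% Time-dependent Schrödinger equation for N particles in 3N dimensions
i\hbar \frac{\partial}{\partial t}\Psi(\mathbf{r}_1,\dots,\mathbf{r}_N,t)
  = \left[ -\sum_{k=1}^{N} \frac{\hbar^2}{2m_k}\nabla_k^2
    + V(\mathbf{r}_1,\dots,\mathbf{r}_N,t) \right]
    \Psi(\mathbf{r}_1,\dots,\mathbf{r}_N,t)

% de Broglie–Bohm guidance equation: velocity of particle k along its
% trajectory, evaluated at the actual configuration of all N particles
\frac{d\mathbf{r}_k}{dt}
  = \frac{\hbar}{m_k}\,
    \operatorname{Im}\!\left(\frac{\nabla_k \Psi}{\Psi}\right)
    \Bigg|_{(\mathbf{r}_1(t),\dots,\mathbf{r}_N(t))}
```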
Abstract:
The aim of this project is to enable children and adolescents with chronic pain to enjoy a better quality of life. The project has two complementary lines of research. The first specific objective is to create and adapt instruments to assess the pain experience in children. Two measures have been studied in schoolchildren: the verbal numerical rating scale (vNRS-11), in both paper and electronic format, and a modified version of the pediatric version of the Survey of Pain Attitudes (Peds-SOPA). The second specific objective is to evaluate the effects of cognitive therapy (CT) in a sample of children aged 12 to 18 who suffer from chronic pain. Specifically, we want to study whether certain personal and family characteristics of these young people (for example, health-related beliefs, pain intensity, coping strategies, treatment expectations) are associated with adherence to therapeutic recommendations and, consequently, are variables that favor these patients' recovery. A 10-session treatment is carried out to achieve this objective. Patients are offered a set of specific skills and strategies so that they can exert greater control over their symptoms and reduce their impact on their lives. The results of these studies will be of great interest for improving the management of pediatric pain. In addition, the results will determine which variables are associated with adherence to therapeutic prescriptions. This is a topic of particular interest because a determining factor of clinical success is the degree to which a person adheres to recommendations. Moreover, the development of pediatric pain measures is highly relevant for both clinicians and researchers, since many clinical decisions are based on what the patient has reported about their pain.
Abstract:
Quantitative or algorithmic trading is the automation of investment decisions obeying a fixed or dynamic set of rules to determine trading orders. It has increasingly made its way up to 70% of the trading volume of one of the biggest financial markets, the New York Stock Exchange (NYSE). However, there is not a significant amount of academic literature devoted to it, due to the private nature of investment banks and hedge funds. This project aims to review the literature and discuss the available models in a subject where publications are scarce and infrequent. We review the basic and fundamental mathematical concepts needed for modeling financial markets, such as stochastic processes, stochastic integration and basic models for price and spread dynamics, necessary for building quantitative strategies. We also contrast these models with real market data at one-minute sampling frequency from the Dow Jones Industrial Average (DJIA). Quantitative strategies try to exploit two types of behavior: trend following or mean reversion. The former is grouped in the so-called technical models and the latter in so-called pairs trading. Technical models have been discarded by financial theoreticians, but we show that they can be properly cast into a well-defined scientific predictor if the signal they generate passes the test of being a Markov time. That is, we can tell whether the signal has occurred or not by examining the information up to the current time; or, more technically, if the event is F_t-measurable. On the other hand, the concept of pairs trading, or market-neutral strategy, is fairly simple. However, it can be cast in a variety of mathematical models, ranging from a method based on a simple Euclidean distance, to a co-integration framework, to stochastic differential equations such as the well-known Ornstein-Uhlenbeck mean-reverting SDE and its variations. A model for forecasting any economic or financial magnitude could be properly defined with scientific rigor but could also lack any economic value and be considered useless from a practical point of view. This is why this project could not be complete without a backtesting of the mentioned strategies. Conducting a useful and realistic backtesting is by no means a trivial exercise, since the "laws" that govern financial markets are constantly evolving in time. This is why we emphasize the calibration process of the strategies' parameters to adapt to the prevailing market conditions. We find that the parameters of technical models are more volatile than their counterparts from market-neutral strategies, and that calibration must be done at high sampling frequency to constantly track the current market situation. As a whole, the goal of this project is to provide an overview of a quantitative approach to investment, reviewing basic strategies and illustrating them by means of a backtesting with real financial market data. The sources of the data used in this project are Bloomberg for intraday time series and Yahoo! for daily prices. All numeric computations and graphics used and shown in this project were implemented from scratch in MATLAB as part of this thesis. No other mathematical or statistical software was used.
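As a brief illustration of the mean-reversion building block mentioned above, here is a minimal Euler–Maruyama simulation of an Ornstein-Uhlenbeck spread in Python (parameter values are arbitrary; the thesis itself used MATLAB):

```python
import numpy as np

# Ornstein-Uhlenbeck SDE: dX_t = theta * (mu - X_t) dt + sigma dW_t
theta, mu, sigma = 2.0, 0.0, 0.3   # reversion speed, long-run mean, volatility
dt, n_steps = 1.0 / 252, 252       # daily steps over one trading year

rng = np.random.default_rng(42)
x = np.empty(n_steps + 1)
x[0] = 1.0                          # spread starts away from its mean
for t in range(n_steps):
    dW = rng.normal(0.0, np.sqrt(dt))          # Brownian increment
    x[t + 1] = x[t] + theta * (mu - x[t]) * dt + sigma * dW

# A pairs trade would enter when the spread deviates far from mu and exit
# as it reverts; x here is the simulated spread path.
print(x[:5], x[-1])
```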
Abstract:
Models incorporating more realistic customer behavior, such as customers choosing from an offer set, have recently become popular in assortment optimization and revenue management. The dynamic program for these models is intractable and is approximated by a deterministic linear program, called the CDLP, which has an exponential number of columns. When there are products that are being considered for purchase by more than one customer segment, the CDLP is difficult to solve, since column generation is known to be NP-hard. However, recent research indicates that a formulation based on segments with cuts imposing consistency (SDCP+) is tractable and approximates the CDLP value very closely. In this paper we investigate the structure of the consideration sets that make the two formulations exactly equal. We show that if the segment consideration sets follow a tree structure, CDLP = SDCP+. We give a counterexample to show that cycles can induce a gap between the CDLP and the SDCP+ relaxation. We derive two classes of valid inequalities, called flow and synchronization inequalities, to further improve SDCP+, based on cycles in the consideration set structure. We give a numeric study showing the performance of these cycle-based cuts.
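The abstract does not state the tree condition formally; one natural reading, consistent with its mention of cycles, links segments whose consideration sets share a product and asks whether that intersection graph is acyclic. A sketch of such a check, offered only under that assumed reading and not as the paper's formal definition, follows:

```python
from itertools import combinations

def intersection_graph_is_forest(consideration_sets):
    """Check whether the segments' intersection graph is acyclic (a forest).

    Nodes are segments; an edge joins two segments whose consideration
    sets share at least one product. Cycles in this graph are the kind of
    structure the abstract says can induce a CDLP/SDCP+ gap (our reading).
    """
    n = len(consideration_sets)
    parent = list(range(n))

    def find(i):                        # union-find with path compression
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i, j in combinations(range(n), 2):
        if consideration_sets[i] & consideration_sets[j]:
            ri, rj = find(i), find(j)
            if ri == rj:
                return False            # this edge closes a cycle
            parent[ri] = rj
    return True

# Three segments overlapping pairwise form a cycle; the third does not.
print(intersection_graph_is_forest([{1, 2}, {2, 3}, {3, 1}]))  # False
print(intersection_graph_is_forest([{1, 2}, {2, 3}, {4}]))     # True
```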
Abstract:
A new quantitative inference model for environmental reconstruction (transfer function), based for the first time on the simultaneous analysis of multiple species groups, has been developed. Quantitative reconstructions based on palaeoecological transfer functions provide a powerful tool for addressing questions of environmental change in a wide range of environments, from oceans to mountain lakes, and over a range of timescales, from decades to millions of years. Much progress has been made in the development of inferences based on multiple proxies, but usually these have been considered separately, and the different numeric reconstructions compared and reconciled post hoc. This paper presents a new method to combine information from multiple biological groups at the reconstruction stage. The aim of the multigroup work was to test the potential of the new approach to make improved inferences of past environmental change by improving upon current reconstruction methodologies. The taxonomic groups analysed include diatoms, chironomids and chrysophyte cysts. We test the new methodology using two cold-environment training sets, namely mountain lakes from the Pyrenees and the Alps. The use of multiple groups, as opposed to single groups, was found to increase the reconstruction skill only slightly, as measured by the root mean square error of prediction (leave-one-out cross-validation), in the case of alkalinity, dissolved inorganic carbon and altitude (a surrogate for air temperature), but not for pH or dissolved CO2. Reasons why the improvement was smaller than might have been anticipated are discussed. These may include the different life-forms, environmental responses and reaction times of the groups under study.
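The skill measure quoted above, the root mean square error of prediction under leave-one-out cross-validation, can be sketched as follows (a generic regression stand-in on simulated data, not the paper's transfer-function model):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut

# Hypothetical training set: rows = lakes, X = species abundances (e.g.,
# diatoms + chironomids + chrysophyte cysts), y = alkalinity.
rng = np.random.default_rng(1)
X = rng.random((40, 12))
y = X[:, :3].sum(axis=1) + 0.1 * rng.normal(size=40)

errors = []
for train, test in LeaveOneOut().split(X):
    model = LinearRegression().fit(X[train], y[train])     # refit without one lake
    errors.append(y[test][0] - model.predict(X[test])[0])  # predict the held-out lake

rmsep = np.sqrt(np.mean(np.square(errors)))  # leave-one-out RMSEP
print(f"RMSEP = {rmsep:.3f}")
```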
Abstract:
Mathematics has formed part of our daily lives since the most remote origins of our civilization, although all too often schools have done little for this functional view of mathematics, reclaiming the formal role of the discipline to the detriment of its more practical and applied side. From this perspective, the competence-based curriculum arises from the need to fill this vacuum and to allow our students to function better in the real situations they will constantly have to resolve throughout their lives. Specifying this general view of mathematical competence as numeric competence should allow all our students to progressively acquire numeric sense, that is, the capacity to apply sound quantitative ideas in real situations.