84 results for Mathematical ability Testing
Development of an optimized methodology for tensile testing of carbon steels in hydrogen environment
Abstract:
The study was performed at OCAS, the Steel Research Centre of ArcelorMittal for the Industry market. The major aim of this research was to obtain an optimized tensile testing methodology with in-situ H-charging to reveal hydrogen embrittlement in various high strength steels. The second aim of this study was the mechanical characterization of the hydrogen effect on high strength carbon steels with varying microstructure, i.e. ferrite-martensite and ferrite-bainite grades. The optimal parameters for H-charging that influence the tensile test results (sample geometry, type of electrolyte, charging method, effect of steel type, etc.) were defined and applied to Slow Strain Rate Testing, Incremental Step Loading and Constant Load Testing. To better understand the initiation and propagation of cracks during tensile testing with in-situ H-charging, and to correlate them with crystallographic orientation, some materials were analyzed in the SEM in combination with the EBSD technique. The introduction of a notch on the tensile samples significantly improves the reproducibility of the results. Comparing the various steel grades reveals that Dual Phase (ferrite-martensite) steels are more sensitive to hydrogen induced cracking than the FB (ferritic-bainitic) ones. This higher sensitivity to hydrogen was reflected in the reduced failure times, increased creep rates and enhanced crack initiation (SEM) of the Dual Phase steels in comparison with the FB steels.
Abstract:
This paper develops a simple model that can be used to estimate the effectiveness of Cohesion expenditure relative to similar but unsubsidized projects, thereby making it possible to explicitly test an important assumption that is often implicit in estimates of the impact of Cohesion policies. Some preliminary results are reported for the case of infrastructure investment in the Spanish regions.
Abstract:
The Keller-Segel system has been widely proposed as a model for bacterial waves driven by chemotactic processes. Recent experiments on E. coli have shown the precise structure of traveling pulses. We present here an alternative mathematical description of traveling pulses at a macroscopic scale. This modeling task is complemented with numerical simulations in accordance with the experimental observations. Our model is derived from an accurate kinetic description of the mesoscopic run-and-tumble process performed by bacteria. This model can account for recent experimental observations with E. coli. Qualitative agreements include the asymmetry of the pulse and a transition in the collective behaviour (clustered motion versus dispersion). In addition, we capture quantitatively the main characteristics of the pulse such as the speed and the relative size of the tails. This work opens several experimental and theoretical perspectives. Coefficients at the macroscopic level are derived from considerations at the cellular scale. For instance, the stiffness of the signal integration process turns out to have a strong effect on collective motion. Furthermore, the bottom-up scaling allows us to perform a preliminary mathematical analysis and to write efficient numerical schemes. This model is intended as a predictive tool for the investigation of bacterial collective motion.
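For context, a minimal form of the classical Keller-Segel system referred to above (written here in generic notation, not the authors' kinetic model) couples the bacterial density rho with a chemoattractant concentration c:

\[
\partial_t \rho = \nabla \cdot \left( D_\rho \nabla \rho - \chi\, \rho\, \nabla c \right), \qquad
\partial_t c = D_c \Delta c + \alpha \rho - \beta c,
\]

where D_rho and D_c are diffusivities, chi is the chemotactic sensitivity, and alpha, beta are production and degradation rates. The paper replaces this macroscopic starting point with a description whose coefficients are derived from the mesoscopic run-and-tumble process.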
Abstract:
We study the properties of the well known Replicator Dynamics when applied to a finitely repeated version of the Prisoners' Dilemma game. We characterize the behavior of such dynamics under strongly simplifying assumptions (i.e. only 3 strategies are available) and show that the basin of attraction of defection shrinks as the number of repetitions increases. After discussing the difficulties involved in trying to relax the 'strongly simplifying assumptions' above, we approach the same model by means of simulations based on genetic algorithms. The resulting simulations describe a behavior of the system very close to the one predicted by the replicator dynamics without imposing any of the assumptions of the mathematical model. Our main conclusion is that mathematical and computational models are good complements for research in social sciences. Indeed, while computational models are extremely useful to extend the scope of the analysis to complex scenarios hard to analyze mathematically, formal models can be useful to verify and to explain the outcomes of computational models.
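As a reference point, the standard replicator dynamics used in the paper can be written (in generic notation for a symmetric game with payoff matrix A) as:

\[
\dot{x}_i = x_i \left[ (A x)_i - x^{\top} A x \right], \qquad i = 1, \dots, n,
\]

where x_i is the population share of strategy i: a strategy grows when its payoff against the current population mix exceeds the population average. In the setting above n = 3 and the strategies are those of the finitely repeated Prisoners' Dilemma.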
Abstract:
We derive necessary and sufficient conditions under which a set of variables is informationally sufficient, i.e. it contains enough information to estimate the structural shocks with a VAR model. Based on such conditions, we suggest a procedure to test for informational sufficiency. Moreover, we show how to amend the VAR if informational sufficiency is rejected. We apply our procedure to a VAR including TFP, unemployment and per-capita hours worked. We find that the three variables are not informationally sufficient. When adding missing information, the effects of technology shocks change dramatically.
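In generic notation (not necessarily the authors' exact specification), the structural VAR setting is:

\[
y_t = A_1 y_{t-1} + \cdots + A_p y_{t-p} + u_t, \qquad u_t = B\,\varepsilon_t,
\]

where y_t collects the observed variables (here TFP, unemployment and per-capita hours worked), u_t are the reduced-form innovations and epsilon_t the structural shocks. Informational sufficiency asks whether the information set spanned by y_t is rich enough for epsilon_t to be recovered from current and past innovations.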
Abstract:
A mathematical model is developed to analyse the combined flow and solidification of a liquid in a small pipe or two-dimensional channel. In either case the problem reduces to solving a single equation for the position of the solidification front. Results show that for a large range of flow rates the closure time is approximately constant, and its value depends primarily on the wall temperature and channel width. However, the ice shape at closure will be very different for low and high fluxes. As the flow rate increases the closure time starts to depend on the flow rate, until eventually the closure time increases dramatically and the pipe never closes.
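As background (generic notation, not the authors' reduced equation), the position s(t) of a solidification front is typically governed by a Stefan condition balancing the latent heat released against the heat fluxes on either side of the front:

\[
\rho L \,\frac{ds}{dt} = k_s \left.\frac{\partial T_s}{\partial x}\right|_{x=s(t)} - k_l \left.\frac{\partial T_l}{\partial x}\right|_{x=s(t)},
\]

where L is the latent heat, k_s and k_l the thermal conductivities, and T_s and T_l the temperatures in the solid and liquid phases. The single evolution equation referred to above is presumably a reduction of a coupled problem of this type once the flow in the pipe or channel is accounted for.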
Abstract:
The objective of the project is the development of a working tool for a Quality department. Through it, it should be possible to run automated tests on certain functionalities of the Logic Class application: Cálculo de Nómina (payroll calculation) and Seguros Sociales (social security contributions).
Abstract:
Emergent molecular measurement methods, such as DNA microarray, qRT-PCR, and many others, offer tremendous promise for the personalized treatment of cancer. These technologies measure the amount of specific proteins, RNA, DNA or other molecular targets from tumor specimens with the goal of "fingerprinting" individual cancers. Tumor specimens are heterogeneous; an individual specimen typically contains unknown amounts of multiple tissue types. Thus, the measured molecular concentrations result from an unknown mixture of tissue types, and must be normalized to account for the composition of the mixture. For example, a breast tumor biopsy may contain normal, dysplastic and cancerous epithelial cells, as well as stromal components (fatty and connective tissue) and blood and lymphatic vessels. Our diagnostic interest focuses solely on the dysplastic and cancerous epithelial cells. The remaining tissue components serve to "contaminate" the signal of interest. The proportion of each of the tissue components changes as a function of patient characteristics (e.g., age), and varies spatially across the tumor region. Because each of the tissue components produces a different molecular signature, and the amount of each tissue type is specimen dependent, we must estimate the tissue composition of the specimen, and adjust the molecular signal for this composition. Using the idea of a chemical mass balance, we consider the total measured concentrations to be a weighted sum of the individual tissue signatures, where the weights are determined by the relative amounts of the different tissue types. We develop a compositional source apportionment model to estimate the relative amounts of tissue components in a tumor specimen. We then use these estimates to infer the tissue-specific concentrations of key molecular targets for sub-typing individual tumors. We anticipate these specific measurements will greatly improve our ability to discriminate between different classes of tumors, and allow more precise matching of each patient to the appropriate treatment.
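A minimal sketch of the mass-balance idea described above, assuming hypothetical tissue signatures and using plain non-negative least squares rather than the authors' compositional source apportionment model:

```python
import numpy as np
from scipy.optimize import nnls

# Hypothetical signature matrix: rows = molecular targets, columns = tissue types
# (e.g. epithelial, stromal, blood). Each column is the profile of a pure tissue.
S = np.array([
    [5.0, 1.0, 0.2],
    [0.5, 4.0, 0.3],
    [0.1, 0.2, 6.0],
    [2.0, 2.5, 0.4],
])

# Measured bulk concentrations for one specimen (an unknown mixture).
m = np.array([3.2, 2.1, 1.5, 2.0])

# Non-negative weights w such that m is approximately S @ w.
w, _ = nnls(S, m)

# Relative tissue amounts (the composition of the specimen) ...
proportions = w / w.sum()
# ... and the part of the measured signal attributable to tissue type 0.
signal_tissue0 = S[:, 0] * w[0]

print("estimated tissue proportions:", proportions)
print("signal attributed to tissue type 0:", signal_tissue0)
```

This only illustrates the weighted-sum decomposition; the model in the paper additionally treats the weights and signatures within a compositional (simplex-valued) framework.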
Abstract:
This article discusses the lessons learned from developing and delivering the Vocational Management Training for the European Tourism Industry (VocMat) online training programme, which was aimed at providing flexible, online distance learning for the European tourism industry. The programme was designed to address managers' need for flexible, senior management level training which they could access at a time and place that fitted in with their working and non-work commitments. The authors present two main approaches to using the Virtual Learning Environment, the feedback from the participants, and the implications of online technology in extending tourism training opportunities.
Abstract:
The main instrument used in psychological measurement is the self-report questionnaire. One of its major drawbacks, however, is its susceptibility to response biases. A known strategy to control these biases has been the use of so-called ipsative items. Ipsative items are items that require the respondent to make between-scale comparisons within each item. The selected option determines to which scale the weight of the answer is attributed. Consequently, in questionnaires consisting only of ipsative items every respondent is allotted an equal amount, i.e. the total score, which each can distribute differently over the scales. Therefore this type of response format yields data that can be considered compositional from its inception. Methodologically oriented psychologists have heavily criticized this type of item format, since the resulting data are also marked by the associated unfavourable statistical properties. Nevertheless, clinicians have kept using these questionnaires to their satisfaction. This investigation therefore aims to evaluate both positions and addresses the similarities and differences between the two data collection methods. The ultimate objective is to formulate a guideline on when to use which type of item format. The comparison is based on data obtained with both an ipsative and a normative version of three psychological questionnaires, which were administered to 502 first-year students in psychology according to a balanced within-subjects design. Previous research only compared the direct ipsative scale scores with the derived ipsative scale scores. The use of compositional data analysis techniques also enables one to compare derived normative score ratios with direct normative score ratios. The addition of the second comparison not only offers the advantage of a better-balanced research strategy; in principle it also allows for parametric testing in the evaluation.
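In compositional terms (a generic formulation, not tied to the specific questionnaires used in the study), an ipsative response vector x_i = (x_{i1}, ..., x_{iD}) over D scales satisfies a constant-sum constraint, and log-ratio transformations such as the centred log-ratio (clr) are the usual route back to standard multivariate analysis:

\[
\sum_{j=1}^{D} x_{ij} = \kappa \quad \text{for every respondent } i, \qquad
\operatorname{clr}(x_i) = \left( \ln \frac{x_{i1}}{g(x_i)}, \dots, \ln \frac{x_{iD}}{g(x_i)} \right),
\]

where kappa is the fixed total score and g(x_i) is the geometric mean of the D components of x_i.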
Abstract:
Catadioptric sensors are combinations of mirrors and lenses designed to obtain a wide field of view. In this paper we propose a new sensor that has omnidirectional viewing ability and also provides depth information about its nearby surroundings. The sensor is based on a conventional camera coupled with a laser emitter and two hyperbolic mirrors. The mathematical formulation and precise specifications of the intrinsic and extrinsic parameters of the sensor are discussed. Our approach overcomes limitations of existing omnidirectional sensors and eventually leads to reduced production costs.
Abstract:
A condition needed for testing nested hypotheses from a Bayesian viewpoint is that the prior for the alternative model concentrates mass around the small, or null, model. For testing independence in contingency tables, the intrinsic priors satisfy this requirement. Further, the degree of concentration of the priors is controlled by a discrete parameter m, the training sample size, which plays an important role in the resulting answer regardless of the sample size. In this paper we study robustness of the tests of independence in contingency tables with respect to intrinsic priors with different degrees of concentration around the null, and compare with other "robust" results by Good and Crook. Consistency of the intrinsic Bayesian tests is established. We also discuss conditioning issues and sampling schemes, and argue that conditioning should be on either one margin or the table total, but not on both margins. Examples using real and simulated data are given.
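Written generically, the null hypothesis of independence in an r x c contingency table with cell probabilities p_{ij} is:

\[
H_0:\; p_{ij} = p_{i\cdot}\, p_{\cdot j} \quad \text{for all } i = 1, \dots, r,\; j = 1, \dots, c,
\]

where p_{i.} and p_{.j} are the row and column marginal probabilities; the intrinsic priors discussed above concentrate the mass of the alternative model in a neighbourhood of this null surface.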
Abstract:
Quantitative or algorithmic trading is the automation of investment decisions obeying a fixed or dynamic set of rules to determine trading orders. It has increasingly made its way up to 70% of the trading volume of one of the biggest financial markets, the New York Stock Exchange (NYSE). However, there is not a significant amount of academic literature devoted to it, due to the private nature of investment banks and hedge funds. This project aims to review the literature and discuss the available models in a subject where publications are scarce and infrequent. We review the basic mathematical concepts needed for modeling financial markets, such as stochastic processes, stochastic integration and basic models for price and spread dynamics, necessary for building quantitative strategies. We also contrast these models with real market data at one-minute sampling frequency from the Dow Jones Industrial Average (DJIA). Quantitative strategies try to exploit two types of behavior: trend following or mean reversion. The former is grouped in the so-called technical models and the latter in the so-called pairs trading. Technical models have been discarded by financial theoreticians, but we show that they can be properly cast as well-defined scientific predictors if the signal they generate passes the test of being a Markov time; that is, we can tell whether the signal has occurred or not by examining the information up to the current time, or, more technically, if the event is F_t-measurable. On the other hand, the concept of pairs trading, or market-neutral strategy, is fairly simple. However, it can be cast in a variety of mathematical models, ranging from a method based on a simple Euclidean distance, to a co-integration framework, to formulations involving stochastic differential equations such as the well-known mean-reverting Ornstein-Uhlenbeck process and its variations. A model for forecasting any economic or financial magnitude could be properly defined with scientific rigor but could also lack any economic value and be considered useless from a practical point of view. This is why this project could not be complete without a backtesting of the mentioned strategies. Conducting a useful and realistic backtesting is by no means a trivial exercise, since the "laws" that govern financial markets are constantly evolving in time. This is the reason why we put emphasis on the calibration of the strategies' parameters to adapt to the given market conditions. We find that the parameters of the technical models are more volatile than their counterparts from market-neutral strategies, and calibration must be done at high sampling frequency to constantly track the current market situation. As a whole, the goal of this project is to provide an overview of a quantitative approach to investment, reviewing basic strategies and illustrating them by means of a backtesting with real financial market data. The sources of the data used in this project are Bloomberg for intraday time series and Yahoo! for daily prices. All numeric computations and graphics used and shown in this project were implemented in MATLAB from scratch as part of this thesis. No other mathematical or statistical software was used.
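As an illustration of the mean-reversion building block mentioned above, here is a minimal sketch (synthetic parameters, not the thesis' calibrated values, and written in Python rather than MATLAB) that simulates an Ornstein-Uhlenbeck spread with the Euler-Maruyama scheme and generates naive pairs-trading signals when the spread deviates from its long-run mean:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical OU parameters: dX_t = theta * (mu - X_t) dt + sigma dW_t
theta, mu, sigma = 5.0, 0.0, 0.3   # mean-reversion speed, long-run mean, volatility
dt, n_steps = 1.0 / 252, 2520      # daily steps over roughly ten years

# Euler-Maruyama simulation of the spread between the two legs of the pair.
x = np.empty(n_steps)
x[0] = mu
for t in range(1, n_steps):
    x[t] = x[t - 1] + theta * (mu - x[t - 1]) * dt \
           + sigma * np.sqrt(dt) * rng.standard_normal()

# Naive market-neutral signals: short the spread when it is far above the mean,
# long when far below, flat otherwise (threshold = one stationary std dev).
band = sigma / np.sqrt(2.0 * theta)   # stationary std dev of the OU process
signal = np.where(x > mu + band, -1, np.where(x < mu - band, 1, 0))

print("fraction of time in a position:", np.mean(signal != 0))
```

A realistic backtest of the kind described in the abstract would additionally estimate theta, mu and sigma from market data on a rolling window and account for transaction costs.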
Abstract:
See the abstract at the beginning of the document in the attached file.
Abstract:
One of the tantalising remaining problems in compositional data analysis lies in how to deal with data sets in which there are components which are essential zeros. By an essential zero we mean a component which is truly zero, not something recorded as zero simply because the experimental design or the measuring instrument has not been sufficiently sensitive to detect a trace of the part. Such essential zeros occur in many compositional situations, such as household budget patterns, time budgets, palaeontological zonation studies, and ecological abundance studies. Devices such as non-zero replacement and amalgamation are almost invariably ad hoc and unsuccessful in such situations. From consideration of such examples it seems sensible to build up a model in two stages, the first determining where the zeros will occur and the second how the unit available is distributed among the non-zero parts. In this paper we suggest two such models, an independent binomial conditional logistic normal model and a hierarchical dependent binomial conditional logistic normal model. The compositional data in such modelling consist of an incidence matrix and a conditional compositional matrix. Interesting statistical problems arise, such as the question of estimability of parameters, the nature of the computational process for the estimation of both the incidence and compositional parameters caused by the complexity of the subcompositional structure, the formation of meaningful hypotheses, and the devising of suitable testing methodology within a lattice of such essential zero-compositional hypotheses. The methodology is illustrated by application to both simulated and real compositional data.
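Schematically (generic notation; the paper's two variants differ in the dependence structure assumed for the incidence indicators), the two-stage structure can be written as:

\[
Z_{ij} \sim \mathrm{Bernoulli}(p_{ij}), \qquad
\operatorname{alr}\!\left(x_i \mid Z_{i\cdot}\right) \sim \mathcal{N}\!\left(\mu_{Z_{i\cdot}}, \Sigma_{Z_{i\cdot}}\right),
\]

where Z_{ij} indicates whether part j is present in composition i, and, conditional on the incidence pattern Z_{i.}, the subcomposition of the non-zero parts follows an additive log-ratio (logistic normal) distribution whose parameters may depend on that pattern. The incidence matrix collects the Z_{ij} and the conditional compositional matrix collects the non-zero subcompositions.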