890 results for phi value analysis
Abstract:
"This pamphlet is a reprint, without change, of ORDP 40-2."
Abstract:
Statistical approaches to study extreme events require, by definition, long time series of data. In many scientific disciplines, these series are often subject to variations at different temporal scales that affect the frequency and intensity of their extremes. Therefore, the assumption of stationarity is violated and alternative methods to conventional stationary extreme value analysis (EVA) must be adopted. Using the example of environmental variables subject to climate change, in this study we introduce the transformed-stationary (TS) methodology for non-stationary EVA. This approach consists of (i) transforming a non-stationary time series into a stationary one, to which the stationary EVA theory can be applied, and (ii) reverse transforming the result into a non-stationary extreme value distribution. As a transformation, we propose and discuss a simple time-varying normalization of the signal and show that it enables a comprehensive formulation of non-stationary generalized extreme value (GEV) and generalized Pareto distribution (GPD) models with a constant shape parameter. A validation of the methodology is carried out on time series of significant wave height, residual water level, and river discharge, which show varying degrees of long-term and seasonal variability. The results from the proposed approach are comparable with the results from (a) a stationary EVA on quasi-stationary slices of non-stationary series and (b) the established method for non-stationary EVA. However, the proposed technique comes with advantages in both cases. For example, in contrast to (a), the proposed technique uses the whole time horizon of the series for the estimation of the extremes, allowing for a more accurate estimation of large return levels. Furthermore, with respect to (b), it decouples the detection of non-stationary patterns from the fitting of the extreme value distribution. As a result, the steps of the analysis are simplified and intermediate diagnostics are possible. In particular, the transformation can be carried out by means of simple statistical techniques such as low-pass filters based on the running mean and the standard deviation, and the fitting procedure is a stationary one with a few degrees of freedom and is easy to implement and control. An open-source MATLAB toolbox implementing this methodology is available at https://github.com/menta78/tsEva/ (Mentaschi et al., 2016).
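A minimal sketch of the transformed-stationary idea described above, not the tsEva toolbox itself: a running-mean and running-standard-deviation normalization makes the series approximately stationary, a stationary GEV with a constant shape parameter is fitted to its block maxima, and the fitted quantile is transformed back into a time-varying return level. The window length, block size, and return period below are illustrative assumptions for a daily series.

```python
import numpy as np
from scipy.stats import genextreme

def ts_eva_return_level(x, window=3651, block=365, T=100):
    """Transformed-stationary sketch: normalize by a running mean and running
    standard deviation (the low-pass transformation), fit a stationary GEV with
    a constant shape parameter to block maxima of the normalized series, then
    reverse-transform the T-block return level."""
    xa = np.asarray(x, dtype=float)
    pad = window // 2
    kernel = np.ones(window) / window
    xp = np.pad(xa, pad, mode="edge")
    mu = np.convolve(xp, kernel, mode="valid")               # running mean
    dev2 = np.pad((xa - mu) ** 2, pad, mode="edge")
    sd = np.sqrt(np.convolve(dev2, kernel, mode="valid"))    # running std
    y = (xa - mu) / sd                                       # normalized, ~stationary series

    n = len(y) // block
    maxima = y[: n * block].reshape(n, block).max(axis=1)    # block (e.g. annual) maxima
    c, loc, scale = genextreme.fit(maxima)                   # stationary GEV fit
    y_T = genextreme.ppf(1.0 - 1.0 / T, c, loc, scale)       # stationary T-block return level
    return mu + sd * y_T                                     # time-varying return level
```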
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Abstract:
We show that diffusion can play an important role in protein-folding kinetics. We explicitly calculate the diffusion coefficient of protein folding in a lattice model and find that it is typically configuration- or reaction coordinate-dependent. The diffusion coefficient decreases as folding progresses toward the native state, because the collapse to a compact state constrains the configurational space available for exploration. This configuration- or position-dependent diffusion coefficient contributes significantly to the kinetics in addition to the thermodynamic free-energy barrier. It effectively changes (here, increases) the kinetic barrier height as well as the position of the corresponding transition state, and therefore modifies the folding kinetic rates as well as the kinetic routes. The resulting folding time, obtained by considering both kinetic diffusion and the thermodynamic folding free-energy profile, is thus longer than the estimate from the thermodynamic free-energy barrier with constant diffusion, but is consistent with the results from kinetic simulations. Configuration- or coordinate-dependent diffusion is especially important for fast folding, when there is a small or no free-energy barrier and kinetics is controlled by diffusion. Including this configurational dependence will challenge the transition state theory of protein folding: the classical transition state theory will have to be modified to be consistent, and more detailed folding mechanistic studies involving phi value analysis based on the classical transition state theory will also have to be modified quantitatively.
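The argument above can be made concrete with the standard mean first-passage-time expression for one-dimensional diffusion along a folding coordinate Q on a free-energy profile F(Q) with a coordinate-dependent diffusion coefficient D(Q): tau = ∫ dQ [exp(F(Q)/kT)/D(Q)] ∫_0^Q dQ' exp(-F(Q')/kT). The sketch below uses hypothetical forms for F(Q) and D(Q), not the paper's lattice-model results; it only illustrates how a D(Q) that decreases toward the native state lengthens the folding time beyond the constant-diffusion estimate.

```python
import numpy as np

# Hypothetical free-energy profile F(Q) and diffusion coefficient D(Q) along a
# folding coordinate Q in [0, 1] (energies in units of kT). These are toy
# functional forms for illustration, not the lattice-model results.
Q = np.linspace(0.0, 1.0, 2001)
F = 4.0 * np.sin(np.pi * Q) ** 2          # barrier of ~4 kT at Q = 0.5
D_const = np.ones_like(Q)                 # constant diffusion coefficient
D_dec = np.exp(-3.0 * Q)                  # D(Q) decreasing as the chain compacts

def mfpt(F, D, Q):
    """Mean first-passage time from Q=0 to Q=1 for 1D diffusion on F(Q) with
    position-dependent D(Q): tau = int dQ e^F/D * int_0^Q dQ' e^-F."""
    dQ = np.diff(Q)
    inner = np.concatenate(([0.0], np.cumsum(np.exp(-F[:-1]) * dQ)))
    outer = np.exp(F) / D * inner
    return float(np.sum(0.5 * (outer[:-1] + outer[1:]) * dQ))  # trapezoid rule

print("constant D:  ", mfpt(F, D_const, Q))   # barrier-only estimate
print("decreasing D:", mfpt(F, D_dec, Q))     # longer folding time with slower diffusion
```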
Abstract:
The pathway of protein folding is now being analyzed at the resolution of individual residues by kinetic measurements on suitably engineered mutants. The kinetic methods generally employed for studying folding are typically limited to the time range of ≥ 1 ms because the folding of denatured proteins is usually initiated by mixing them with buffers that favor folding, and the dead time of rapid mixing experiments is about a millisecond. We now show that the study of protein folding may be extended to the microsecond time region by using temperature-jump measurements on the cold-unfolded state of a suitable protein. We are able to detect early events in the folding of mutants of barstar, the polypeptide inhibitor of barnase. A preliminary characterization of the fast phase from spectroscopic and phi-value analysis indicates that it is a transition between two relatively solvent-exposed states with little consolidation of structure.
Abstract:
The middle section module of the InnoTrack™ moving walk was re-engineered according to a value analysis process. A self-supporting steel structure for the moving walk was created as a result of this process. The designed structure was verified and validated by prototype tests and finite element method calculations. The self-supporting steel structure replaces the original design of the middle section module in InnoTrack™. The designed structure provides higher satisfaction of customers' needs while using fewer resources. The redesigned middle section module thus provides higher value to the customer.
Abstract:
Coarse graining is a popular technique used in physics to speed up the computer simulation of molecular fluids. An essential part of this technique is a method that solves the inverse problem of determining the interaction potential, or its parameters, from given structural data. Due to discrepancies between model and reality, the potential is not unique, so the stability of such a method and its convergence to a meaningful solution are issues. In this work, we investigate empirically whether coarse graining can be improved by applying the theory of inverse problems from applied mathematics. In particular, we use singular value analysis to reveal the weak interaction parameters, which have a negligible influence on the structure of the fluid and which cause non-uniqueness of the solution. Further, we apply a regularizing Levenberg-Marquardt method, which is stable against the mentioned discrepancies. We then compare it to the existing physical methods, the Iterative Boltzmann Inversion and the Inverse Monte Carlo method, which are fast and well adapted to the problem but sometimes have convergence problems. From an analysis of the Iterative Boltzmann Inversion, we develop a meaningful approximation of the structure and use it to derive a modification of the Levenberg-Marquardt method. We apply the latter to reconstruct the interaction parameters from experimental data for liquid argon and nitrogen. We show that the modified method is stable, convergent, and fast. Further, the singular value analysis of the structure and its approximation allows us to determine the crucial interaction parameters, that is, to simplify the modeling of interactions. Our results therefore build a rigorous bridge between the inverse problem from physics and the powerful solution tools from mathematics.
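A hedged sketch, in Python rather than the thesis's own code, of the two mathematical tools named above: a singular value analysis of the sensitivity (Jacobian) matrix of the structural data with respect to the interaction parameters, whose small singular values flag weak, ill-determined parameters, and a single damped Levenberg-Marquardt update. The matrix J, the residual vector, and the damping value are illustrative placeholders, not the thesis's model.

```python
import numpy as np

# Singular value analysis of a sensitivity matrix J = d(structure)/d(parameters),
# followed by one damped Levenberg-Marquardt step. All quantities are synthetic.
rng = np.random.default_rng(0)
J = rng.normal(size=(200, 6))          # sensitivities of structural data w.r.t. 6 parameters
J[:, 5] *= 1e-4                        # parameter 5 barely influences the structure

U, s, Vt = np.linalg.svd(J, full_matrices=False)
print("singular values:", s)           # tiny singular values flag weak parameters
weak_directions = Vt[s < 1e-2 * s.max()]   # parameter-space directions with negligible effect

# One Levenberg-Marquardt step: (J^T J + lam * I) dp = J^T r
residual = rng.normal(size=200)        # mismatch between model and reference structure
lam = 1e-2                             # damping stabilizes the ill-posed inversion
dp = np.linalg.solve(J.T @ J + lam * np.eye(J.shape[1]), J.T @ residual)
print("parameter update:", dp)
```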
Abstract:
Mode of access: Internet.
Abstract:
This study was an evaluation of a Field Project Model Curriculum and its impact on achievement, attitude toward science, attitude toward the environment, self-concept, and academic self-concept with at-risk eleventh and twelfth grade students. One hundred eight students were pretested and posttested on the Piers-Harris Children's Self-Concept Scale, PHCSC (1985); the Self-Concept as a Learner Scale, SCAL (1978); the Marine Science Test, MST (1987); the Science Attitude Inventory, SAI (1970); and the Environmental Attitude Scale, EAS (1972). Using a stratified random design, three groups of students were randomly assigned, according to sex and stanine level, to three treatment groups. Group one received the field project method, group two received the field study method, and group three received the field trip method. All three groups followed the marine biology course content as specified by Florida Student Performance Objectives and Frameworks. The intervention occurred for ten months with each group participating in outside-of-classroom activities on a trimonthly basis. Analysis of covariance procedures were used to determine treatment effects. F-ratios, p-levels and t-tests at p < .0062 (.05/8) indicated that a significant difference existed among the three treatment groups. Findings indicated that groups one and two were significantly different from group three, with group one displaying significantly higher results than group two. There were no significant differences between males and females in performance on the five dependent variables. The tenets underlying environmental education are congruent with the recommendations toward the reform of science education. These include a value analysis approach, inquiry methods, and critical thinking strategies that are applied to environmental issues.
Abstract:
Extreme stock price movements are of great concern to both investors and the entire economy. For investors, a single negative return, or a combination of several smaller returns, can possibly wipe out so much capital that the firm or portfolio becomes illiquid or insolvent. If enough investors experience this loss, it could shock the entire economy. An example of such a case is the stock market crash of 1987. Furthermore, there has been a lot of recent interest regarding the increasing volatility of stock prices. This study presents an analysis of extreme stock price movements. The data utilized were the daily returns for the Standard and Poor's 500 index from January 3, 1978 to May 31, 2001. Research questions were analyzed using the statistical models provided by extreme value theory. One of the difficulties in examining stock price data is that there is no consensus regarding the correct shape of the distribution function generating the data. An advantage of extreme value theory is that no detailed knowledge of this distribution function is required to apply the asymptotic theory; we focus on the tail of the distribution. Extreme value theory allows us to estimate a tail index, which we use to derive bounds on the returns for very low exceedance probabilities. Such information is useful in evaluating the volatility of stock prices. There are three possible limit laws for the maximum: Gumbel (light-tailed), Fréchet (heavy-tailed), or Weibull (bounded tail). Results indicated that extreme returns during the time period studied follow a Fréchet distribution. Thus, this study finds that extreme value analysis is a valuable tool for examining stock price movements and can be more efficient than the usual variance in measuring risk.
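A hedged sketch of the block-maxima analysis the abstract describes: fit a generalized extreme value (GEV) distribution to yearly maxima of daily losses, read the sign of the shape parameter to select among the three limit laws, and estimate the tail index with a Hill estimator. The simulated heavy-tailed returns below are placeholders, not the 1978 to 2001 S&P 500 data.

```python
import numpy as np
from scipy.stats import genextreme, t as student_t

# Simulated heavy-tailed daily returns: 23 "years" of 250 trading days each.
rng = np.random.default_rng(1)
returns = student_t.rvs(df=3, size=250 * 23, random_state=rng) * 0.01
losses = -returns

# GEV fit to yearly maxima of losses; the sign of the shape parameter selects the
# limit law (Frechet: heavy tail, Gumbel: light tail, Weibull: bounded tail).
maxima = losses.reshape(23, 250).max(axis=1)
c, loc, scale = genextreme.fit(maxima)
xi = -c                                 # SciPy's c is the negative of the usual GEV shape xi
print("GEV shape xi:", xi, "-> Frechet (heavy tail)" if xi > 0 else "-> Gumbel/Weibull")

# Hill estimator of the tail index from the k largest losses.
k = 100
order = np.sort(losses)[::-1]
print("Hill tail index:", 1.0 / np.mean(np.log(order[:k] / order[k])))

# Bound on a yearly-maximum loss exceeded with probability p, from the fitted GEV.
p = 0.001
print("return level:", genextreme.ppf(1 - p, c, loc, scale))
```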
Abstract:
In the article - Menu Analysis: Review and Evaluation - by Lendal H. Kotschevar, Distinguished Professor, School of Hospitality Management, Florida International University, Kotschevar’s initial statement reads: “Various methods are used to evaluate menus. Some have quite different approaches and give different information. Even those using quite similar methods vary in the information they give. The author attempts to describe the most frequently used methods and to indicate their value. A correlation calculation is made to see how well certain of these methods agree in the information they give.” There is more than one way to look at the word menu. The culinary selections decided upon by the head chef or owner of a restaurant, which ultimately define the type of restaurant, is one way. The physical outline of the food, which a patron actually holds in his or her hand, is another. These descriptions are most common to the word menu. The author primarily concentrates on the latter description, and uses the act of counting the number of items sold on a menu to measure the popularity of any particular item. This, along with a formula, allows Kotschevar to arrive at a specific value per item. Menu analysis would appear a difficult subject to broach. How does a person approach a menu analysis? How do you qualify and quantify a menu? It seems such a subjective exercise. The author offers methods and outlines for approaching menu analysis from empirical perspectives. “Menus are often examined visually through the evaluation of various factors. It is a subjective method but has the advantage of allowing scrutiny of a wide range of factors which other methods do not,” says Distinguished Professor Kotschevar. “The method is also highly flexible. Factors can be given a score value and scores summed to give a total for a menu. This allows comparison between menus. If the one making the evaluations knows menu values, it is a good method of judgment,” he further offers. The author wants you to know that assigning values is fundamental to a pragmatic menu analysis; it is how the reviewer keeps score, so to speak. Value merit provides reliable criteria from which to gauge a particular menu item. In the final analysis, menu evaluation provides the mechanism for either keeping or rejecting selected items on a menu. Kotschevar provides at least three different matrix evaluation methods; they are defined as the Miller method, the Smith and Kasavana method, and the Pavesic method. He offers illustrated examples of each via a table format. These are helpful tools, since trying to explain the theories behind the tables would be difficult at best. Kotschevar also references examples of analysis methods which aren’t matrix based; the Hayes and Huffman Goal Value Analysis is one such method. The author sees no one method as better than another, and suggests that combining two or more of the methods would be a benefit.
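As a hedged illustration of one of the matrix methods named above, here is a common rendering of the Smith and Kasavana menu-engineering matrix: items are classified by popularity against a 70-percent-of-equal-share threshold and by contribution margin against the menu's sales-weighted average. The item names and figures are hypothetical, and the thresholds follow the usual textbook description rather than the article itself.

```python
# Hypothetical menu: item -> (number sold, selling price, food cost)
menu = {
    "grilled snapper": (120, 24.00, 9.50),
    "pasta primavera": (300, 16.00, 4.00),
    "ribeye steak":    (60,  32.00, 14.00),
    "house salad":     (90,  9.00,  3.50),
}

total_sold = sum(n for n, _, _ in menu.values())
pop_threshold = 0.70 * (1.0 / len(menu))                     # 70% of an equal share of sales
avg_cm = sum(n * (p - c) for n, p, c in menu.values()) / total_sold  # weighted-average CM

for item, (n, price, cost) in menu.items():
    popular = (n / total_sold) >= pop_threshold
    profitable = (price - cost) >= avg_cm
    label = {(True, True): "star", (True, False): "plowhorse",
             (False, True): "puzzle", (False, False): "dog"}[(popular, profitable)]
    print(f"{item:17s} mix {n / total_sold:5.1%}  CM ${price - cost:5.2f}  -> {label}")
```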
Abstract:
In many product categories, unit prices facilitate price comparisons across brands and package sizes; this enables consumers to identify those products that provide the greatest value. However, in other product categories, unit prices may be confusing. This is because there are two types of unit pricing: measure-based and usage-based. Measure-based unit prices are what the name implies; price is expressed in cents or dollars per unit of measure (e.g., ounce). Usage-based unit prices, on the other hand, are expressed in terms of cents or dollars per use (e.g., wash load or serving). The results of this study show that in two different product categories (i.e., laundry detergent and dry breakfast cereal), measure-based unit prices reduced consumers’ ability to identify higher value products, but when a usage-based unit price was provided, their ability to identify product value increased. When provided with both a measure-based and a usage-based unit price, respondents did not perform as well as when they were provided only a usage-based unit price, additional evidence that the measure-based unit price hindered consumers’ comparisons. Finally, the presence of two potential moderators, education about the meaning of the two measures and having to rank order the options in the choice set in terms of value before choosing, did not eliminate these effects.
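A small worked example of the two unit-price types contrasted above, with hypothetical brands and figures: the measure-based comparison and the usage-based comparison can point to different "best value" products, which is the confusion the study examines.

```python
# Hypothetical laundry detergents: brand -> (price in $, ounces, wash loads)
detergents = {
    "Brand A": (10.00, 100, 50),
    "Brand B": (8.00,   64, 48),
}

for brand, (price, oz, loads) in detergents.items():
    print(f"{brand}: {price / oz:.3f} $/oz (measure-based), "
          f"{price / loads:.3f} $/load (usage-based)")

# Brand A wins on the measure-based unit price (0.100 vs 0.125 $/oz), but
# Brand B wins on the usage-based unit price (0.167 vs 0.200 $/load),
# the comparison that actually reflects value per use.
```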
Abstract:
Master's degree in Electrical and Computer Engineering