969 results for ZERO-RANGE PROCESS
Abstract:
We propose a long-range, high-precision optical time-domain reflectometry (OTDR) scheme based on an all-fiber supercontinuum source. The source consists simply of a CW pump laser with moderate power and a section of fiber whose zero-dispersion wavelength lies near the laser's central wavelength. The spectral and time-domain properties of the source are investigated, showing that its ultra-wide-band chaotic behavior makes it well suited to nonlinear-optics applications such as correlation OTDR, and mm-scale spatial resolution is demonstrated. We then analyze the key factors limiting the operational range of such an OTDR, e.g., integrated Rayleigh backscattering and fiber loss, which degrade the optical signal-to-noise ratio at the receiver, and discuss guidelines for counteracting this signal fading. Finally, we experimentally demonstrate a correlation OTDR with a 100 km sensing range and 8.2 cm spatial resolution (1.2 million resolved points), verifying the theoretical analysis.
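As a minimal sketch (not the authors' implementation) of the ranging principle behind correlation OTDR: the noise-like probe is cross-correlated with the delayed, attenuated backscatter, and the correlation peak sits at the round-trip delay. All parameters below (probe length, delay, attenuation) are assumptions for illustration only.

```python
import random

# Illustrative correlation-OTDR ranging with assumed parameters:
# a chaotic-like probe and a delayed, attenuated echo of it.
random.seed(0)
N = 2000
probe = [random.uniform(-1.0, 1.0) for _ in range(N)]      # noise-like probe

TRUE_DELAY = 137                                           # samples (assumed)
echo = ([0.0] * TRUE_DELAY + [0.3 * s for s in probe])[:N] # attenuated echo

def xcorr(a, b, lag):
    """Unnormalized cross-correlation: sum of a[i] * b[i + lag]."""
    return sum(a[i] * b[i + lag] for i in range(len(b) - lag))

# The lag maximizing the correlation recovers the round-trip delay;
# distance then follows from the sample period and group velocity.
estimated_delay = max(range(300), key=lambda lag: xcorr(probe, echo, lag))
```

The broadband chaotic source matters because a sharp correlation peak (and hence fine spatial resolution) requires a probe whose autocorrelation is close to a delta function.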
Abstract:
Hazardous radioactive liquid waste is the legacy of more than 50 years of plutonium production associated with the United States' nuclear weapons program. It is estimated that more than 245,000 tons of nitrate wastes are stored at facilities such as the single-shell tanks (SST) at the Hanford Site in the state of Washington and the Melton Valley storage tanks at Oak Ridge National Laboratory (ORNL) in Tennessee. To develop an innovative new technology for the destruction and immobilization of nitrate-based radioactive liquid waste, the United States Department of Energy (DOE) initiated the research project that resulted in the technology known as the Nitrate to Ammonia and Ceramic (NAC) process. Because the nitrate anion is highly mobile and difficult to immobilize, especially in the relatively porous cement-based grout used to date for the immobilization of liquid waste, it presents a major obstacle to environmental clean-up initiatives. Thus, in an effort to contribute to the existing body of knowledge and enhance the efficacy of the NAC process, this research involved experimental measurement of the rheological and heat-transfer behavior of the NAC product slurry and determination of the optimal operating parameters for the continuous NAC chemical reaction process. Test results indicate that the NAC product slurry exhibits typical non-Newtonian flow behavior. Correlation equations for the slurry's rheological properties and heat-transfer rate in pipe flow have been developed; these should prove valuable in the design of a full-scale NAC processing plant. The 20-percent slurry exhibited typical dilatant (shear-thickening) behavior and was in the turbulent flow regime owing to its lower viscosity. The 40-percent slurry exhibited typical pseudoplastic (shear-thinning) behavior and remained in the laminar flow regime throughout the experimental range.
The reactions were found to be more efficient in the lower temperature range investigated. With respect to leachability, the experimental final NAC ceramic waste form is comparable to the final product of vitrification, the technology chosen by DOE to treat these wastes. As the NAC process has the potential to reduce the volume of nitrate-based radioactive liquid waste by as much as 70 percent, it promises not only to enhance environmental remediation efforts but also to effect substantial cost savings.
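The dilatant and pseudoplastic regimes reported above are conventionally described by the Ostwald-de Waele power-law model, in which the apparent viscosity is mu = K * (shear rate)^(n-1). The sketch below uses made-up consistency and flow-behavior indices, not the fitted NAC correlations, purely to show the two regimes.

```python
def apparent_viscosity(K, n, shear_rate):
    """Power-law (Ostwald-de Waele) fluid: mu_app = K * gamma_dot**(n - 1).
    n > 1: dilatant (shear-thickening); n < 1: pseudoplastic (shear-thinning)."""
    return K * shear_rate ** (n - 1)

# Assumed illustrative indices (not the measured NAC slurry values):
dilatant = [apparent_viscosity(0.5, 1.4, g) for g in (1, 10, 100)]  # rises
pseudo   = [apparent_viscosity(0.5, 0.6, g) for g in (1, 10, 100)]  # falls
```

The lower viscosity of the shear-thickening 20-percent slurry at low shear is consistent with its staying turbulent, while the shear-thinning 40-percent slurry's higher low-shear viscosity keeps it laminar.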
Abstract:
A model was tested to examine relationships among leadership behaviors, team diversity, and team process measures with team performance and satisfaction at both the team and leader-member levels of analysis. Relationships between leadership behavior and team demographic and cognitive diversity were hypothesized to have both direct effects on organizational outcomes and indirect effects through team processes. Leader-member differences were investigated to determine the effects of leader-member diversity on leader-member exchange quality, individual effectiveness, and satisfaction. Leadership had little direct effect on team performance, but several strong positive indirect effects through team processes. Demographic diversity had no impact on team processes, directly impacted only one performance measure, and moderated the leadership-to-team-process relationship. Cognitive diversity had a number of direct and indirect effects on team performance, with uniformly positive net effects, and did not moderate the leadership-to-team-process relationship. In sum, the team model suggests a complex combination of leadership behaviors positively impacting team processes, demographic diversity having little impact on team process or performance, cognitive diversity having a positive net impact, and team processes having mixed effects on team outcomes. At the leader-member level, leadership behaviors were a strong predictor of leader-member exchange (LMX) quality. Leader-member demographic and cognitive dissimilarity were each predictors of LMX quality, but failed to moderate the leader-behavior-to-LMX-quality relationship. LMX quality was strongly and positively related to self-reported effectiveness and satisfaction. The study makes several contributions to the literature. First, it explicitly links leadership and team diversity. Second, demographic and cognitive diversity are conceptualized as distinct and multi-faceted constructs.
Third, a methodology for creating an index of categorical demographic and interval cognitive measures is provided so that diversity can be measured in a holistic, conjoint fashion. Fourth, the study simultaneously investigates the impact of diversity at the team and leader-member levels of analysis. Fifth, insights into the moderating impact of different forms of team diversity on the leadership-to-team-process relationship are provided. Sixth, the study incorporates a wide range of objective and independent measures to provide a 360-degree assessment of team performance.
Abstract:
The most important factor affecting the decision-making process in finance is risk, which is usually measured by variance (total risk) or systematic risk (beta). Since an investor's sentiment (whether she is an optimist or a pessimist) plays a very important role in the choice of beta measure, any decision made for the same asset within the same time horizon will differ across individuals. In other words, behavioral traits mean there will be neither homogeneity of beliefs nor the rational expectations prevalent in the market. This dissertation consists of three essays. In the first essay, "Investor Sentiment and Intrinsic Stock Prices", a new technical trading strategy was developed using a firm-specific individual sentiment measure. This behavior-based trading strategy forecasts a range within which a stock price moves in a particular period and can be used for stock trading. Results indicate that sample firms trade within a range and give signals as to when to buy or sell. The second essay, "Managerial Sentiment and the Value of the Firm", examined the effect of managerial sentiment on the project selection process under the net present value criterion, and on the value of the firm. The analysis found that high-sentiment and low-sentiment managers obtain different values for the same firm before and after the acceptance of a project, and that managerial sentiment produces changes in the cost of capital, specifically the weighted average cost of capital. The last essay, "Investor Sentiment and Optimal Portfolio Selection", analyzed how investor sentiment affects the nature and composition of the optimal portfolio as well as portfolio performance. Results suggested that investor sentiment completely changes the portfolio composition, i.e., a high-sentiment investor will hold a completely different choice of assets than a low-sentiment investor.
The results indicated the practical applicability of a behavioral-model-based technical indicator for stock trading. Additional insights include the valuation of firms with a behavioral component and the importance of distinguishing portfolio performance based on sentiment factors.
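A minimal sketch of the net present value criterion used in the second essay's project-selection analysis. The cash flows and the two discount rates (standing in for the high- and low-sentiment managers' costs of capital) are hypothetical numbers chosen only to show how sentiment-dependent discounting can flip the accept/reject decision for the same project.

```python
def npv(rate, cashflows):
    """Net present value; cashflows[0] occurs at t = 0."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cashflows))

# Hypothetical project: 100 outlay, three annual inflows of 40.
project = [-100.0, 40.0, 40.0, 40.0]
low_sentiment_rate = 0.12   # assumed pessimist's discount rate
high_sentiment_rate = 0.08  # assumed optimist's discount rate

# The same project is rejected at 12% but accepted at 8%.
pessimist_npv = npv(low_sentiment_rate, project)   # negative
optimist_npv = npv(high_sentiment_rate, project)   # positive
```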
Abstract:
We develop a new autoregressive conditional process to capture both the changes and the persistence of the intraday seasonal (U-shape) pattern of volatility in essay 1. Unlike other procedures, this approach allows the intraday volatility pattern to change over time without the filtering process injecting a spurious pattern of noise into the filtered series. We show that prior deterministic filtering procedures are special cases of the autoregressive conditional filtering process presented here. Lagrange multiplier tests show that the stochastic seasonal variance component is statistically significant. Specification tests using the correlogram and cross-spectral analyses confirm the reliability of the autoregressive conditional filtering process. In essay 2 we develop a new methodology to decompose return variance in order to examine the informativeness embedded in the return series. The variance is decomposed into an information-arrival component and a noise component. This decomposition differs from previous studies in that both the informational variance and the noise variance are time-varying; furthermore, the covariance of the informational and noise components is no longer restricted to be zero. The resultant measure of price informativeness is defined as the informational variance divided by the total variance of returns. The noisy rational expectations model predicts that uninformed traders react to price changes more than informed traders, since uninformed traders cannot distinguish between price changes caused by information arrivals and those caused by noise. This hypothesis is tested in essay 3 using intraday data with the intraday seasonal volatility component removed, based on the procedure in the first essay. The resultant seasonally adjusted variance series is decomposed into components caused by unexpected information arrivals and by noise in order to examine informativeness.
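The informativeness measure defined above can be summarized schematically. If the return is the sum of an informational component and a noise component, the total return variance is var_info + var_noise + 2*cov, with the covariance term left unrestricted as in essay 2; the numbers below are placeholders, not estimates from the dissertation.

```python
def informativeness(var_info, var_noise, cov):
    """Price informativeness: informational variance over total return
    variance, where total = var_info + var_noise + 2*cov (since the
    return is the sum of the two components; cov need not be zero)."""
    total = var_info + var_noise + 2.0 * cov
    return var_info / total

# Placeholder variances: zero covariance vs. positive covariance.
baseline = informativeness(4.0, 1.0, 0.0)   # 4 / 5 = 0.8
with_cov = informativeness(4.0, 1.0, 0.5)   # 4 / 6, lower ratio
```

Allowing a nonzero covariance matters: a positive covariance inflates total variance and lowers the measured informativeness relative to the restricted (zero-covariance) case.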
Abstract:
We have obtained total and differential cross sections for the strangeness-changing charged-current weak reaction ν_L + p → Λ(Σ⁰) + L⁺ using standard dipole form factors, where L stands for an electron, muon, or tau lepton and L⁺ stands for a positron, anti-muon, or anti-tau lepton. We calculated these reactions from near threshold (a few hundred MeV) to 8 GeV of incoming neutrino energy and obtained the contributions of the various form factors to the total and differential cross sections. We did this in support of possible experiments that might be carried out by the MINERνA collaboration at Fermilab. The calculation is phenomenologically based and makes use of SU(3) relations to obtain the standard vector-current form factors, and of data from Λ beta decay to obtain the axial-current form factor. We also estimated the contributions of the pseudoscalar form factor and of the F_E and F_S form factors to the total and differential cross sections. We discuss our results and consider under what circumstances the various form factors might be extracted. In particular, we wish to test the SU(3) assumptions made in determining all the form factors over a range of q² values. Recently, new form factors were obtained from recoil-proton measurements in electron-proton electromagnetic scattering at Jefferson Lab. We therefore calculated the contributions of the individual form factors to the total and differential cross sections for this new set of form factors. We found that the differential and total cross sections for Λ production change only slightly between the two sets of form factors, but that they change substantially for Σ⁰ production. We discuss the possibility of distinguishing between the two cases in the experiments planned by the MINERνA collaboration.
We also undertook the calculation for the inverse reaction e⁻ + p → Λ + ν_e with a polarized outgoing Λ, which might be performed at Jefferson Lab, and provided additional analysis of the contributions of the individual form factors to the differential cross sections for this case.
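The "standard dipole form factors" referred to above share the generic shape G(Q²) = G(0) / (1 + Q²/M²)². The sketch below uses unit normalization and a 1 GeV dipole mass as placeholders, not the fitted values entering the calculation.

```python
def dipole(q2, g0, m2):
    """Standard dipole form factor: G(Q^2) = G(0) / (1 + Q^2 / M^2)^2,
    with q2 and m2 in the same units (e.g. GeV^2)."""
    return g0 / (1.0 + q2 / m2) ** 2

# Placeholder parameters: G(0) = 1, dipole mass M = 1 GeV (M^2 = 1 GeV^2).
at_zero = dipole(0.0, 1.0, 1.0)  # equals G(0)
at_m2 = dipole(1.0, 1.0, 1.0)    # falls by a factor of 4 at Q^2 = M^2
```

The dipole mass controls how quickly each form factor falls with Q², which is why comparing the two form-factor sets changes the predicted cross sections mainly at higher Q².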
Abstract:
In the U.S., construction accidents remain a significant economic and social problem. Despite recent improvement, the construction industry has generally lagged behind other industries in implementing safety as a total management process for achieving zero accidents and developing a high-performance safety culture. The aspect of this total approach to safety that has frustrated the construction industry most is "measurement": identifying and quantifying the factors that critically influence safe work behaviors. The basic problem is the difficulty of deciding what to measure and how to measure it, particularly the intangible aspects of safety. Without measurement, the notion of continuous improvement is hard to follow. This research was undertaken to develop a strategic framework for the measurement and continuous improvement of total safety, in order to achieve and sustain the goal of zero accidents while improving the quality, productivity, and competitiveness of the construction industry as it moves forward. The research was based on an integral model of total safety that allowed the decomposition of safety into interior and exterior characteristics using a multiattribute analysis technique. Statistical relationships between total safety dimensions and safety performance (measured by safe work behavior) were revealed through a series of latent variables (factors) that describe the total safety environment of a construction organization. A structural equation model (SEM) was estimated for the latent variables to quantify the relationships among them and between these total safety determinants and the safety performance of a construction organization. The resulting SEM constitutes a strategic framework for identifying, measuring, and continuously improving safety as a total concern for achieving and sustaining the goal of zero accidents.
Abstract:
A commonly held view is that creation of excessive domestic credit may lead to inflation problems; however, many economists uphold the possibility that generous domestic credit under appropriate conditions will result in increases in output. This hypothesis is examined for Japan and Colombia for the period 1950-1993. Domestic credit theories are reviewed from the times of Thornton and Smith to the recent work of Lewis, McKinnon, Stiglitz, and Japanese economists such as K. Emi, R. Tachi, and others. It is found that in post-war Japan, efficient financial markets and the decisive role of the government in orienting investment decisions seem to have positively influenced the effectiveness of domestic credit as an output-stimulating variable. In Colombia, by contrast, the absence of these features seems to explain why domestic credit is not very effective as an output-stimulating variable. Multiple regression analyses show that domestic credit is a strong explanatory variable for output increases in Japan and a weak one in Colombia's case over the period studied. For Japan, the correlation depicts a positive relationship between the two variables with a decreasing rate, very similar to a typical production function. Moreover, the positive decreasing rate is confirmed when net domestic credit is used in the correlations. For Colombia, a positive relationship is also found when accumulated domestic credit is used, but if net domestic credit is the source of the correlations, the positive decreasing rate is not obtained. Granger causality tests determined causality from domestic credit to output for Japan and no causality for Colombia at the 1% significance level; the differences are explained by: (1) the low development level of the financial system in Colombia; (2) the nonexistence of a consistent domestic credit policy to foster economic development;
(3) the lack of authoritative orientation in the allocation of financial resources and the nonexistence of long-range industrialization programs in Colombia that could productively channel credit resources. For the system of equations relating domestic credit and exports, the Granger causality tests determined no causality between domestic credit and exports for either Japan or Colombia, also at the 1% significance level.
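The Granger causality tests above compare a restricted regression (lags of output only) with an unrestricted one (adding lags of domestic credit) through an F-statistic; a schematic helper follows, with made-up residual sums of squares rather than the dissertation's estimates.

```python
def granger_f(rss_restricted, rss_unrestricted, n_extra_lags, n_obs, k_unres):
    """F-statistic for excluding the candidate cause's lags:
    F = ((RSS_r - RSS_u) / q) / (RSS_u / (n - k)),
    q = number of excluded lags, k = parameters in the unrestricted model."""
    num = (rss_restricted - rss_unrestricted) / n_extra_lags
    den = rss_unrestricted / (n_obs - k_unres)
    return num / den

# Hypothetical numbers: dropping one credit lag raises RSS from 8 to 10
# over 50 observations with 3 unrestricted parameters.
f_stat = granger_f(10.0, 8.0, 1, 50, 3)  # a large F rejects "no causality"
```

If f_stat exceeds the 1% critical value of the F(q, n-k) distribution, the null that credit does not Granger-cause output is rejected, which is the comparison behind the Japan/Colombia contrast above.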
Abstract:
A heat loop suitable for the study of thermal fouling and its relationship to corrosion processes was designed, constructed, and tested. The design adopted was an improvement over those used by investigators such as Hopkins and the Heat Transfer Research Institute, in that very low levels of fouling could be detected accurately, the heat transfer surface could be readily removed for examination, and the chemistry of the environment could be carefully monitored and controlled. In addition, an indirect method of electrical heating of the heat transfer surface was employed to eliminate the magnetic and electric effects that arise when direct resistance heating is applied to a test section. The loop was tested using a 316 stainless steel test section and a suspension of ferric oxide in water, in an attempt to duplicate the results obtained by Hopkins. Two types of thermal fouling resistance versus time curves were obtained. (i) An asymptotic fouling curve, similar to the fouling behaviour described by Kern and Seaton and other investigators, was the most frequent type obtained: thermal fouling occurred at a steadily decreasing rate before reaching a final asymptotic value. (ii) If an asymptotically fouled tube was cooled with rapid circulation for periods up to eight hours at zero heat flux and heating was then restarted, fouling recommenced at a high linear rate. The fouling results were observed to be similar to, and in agreement with, the fouling behaviour previously reported by Hopkins, and it was possible to duplicate the previous results quite closely. This supports Hopkins' contention that the fouling observed was due to a crevice corrosion process and not an artifact of that heat loop, which might have caused electrical and magnetic effects influencing the fouling. The effects of Reynolds number and heat flux on the asymptotic fouling resistance have been determined.
A single experiment to study the effect of oxygen concentration was also carried out. The ferric oxide concentration for most of the fouling trials was standardized at 2400 ppm, and the ranges of Reynolds number and heat flux for the study were 11,000-29,500 and 89-121 kW/m², respectively.
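The asymptotic fouling curves described above are commonly fit by the Kern-Seaton model, R_f(t) = R_f∞ * (1 - exp(-t/τ)): the fouling rate decreases steadily toward a final asymptotic resistance. The asymptote and time constant below are placeholders, not values fitted to the loop data.

```python
import math

def fouling_resistance(t, rf_inf, tau):
    """Kern-Seaton asymptotic fouling: R_f(t) = rf_inf * (1 - exp(-t / tau)).
    rf_inf is the asymptotic fouling resistance, tau the time constant."""
    return rf_inf * (1.0 - math.exp(-t / tau))

# Placeholder parameters: asymptote 1e-4 m^2*K/W, time constant 5 h.
curve = [fouling_resistance(t, 1.0e-4, 5.0) for t in (0, 5, 25)]
```

The model captures curve type (i) above; the linear-rate regrowth of type (ii) after a zero-heat-flux cooling period is the departure from this model that the crevice-corrosion interpretation addresses.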
U.S. Army meteorologist Private Merle Coleman at White Sands Missile Range in Las Cruces, New Mexico
Abstract:
General note: Title and date provided by Bettye Lane.
Abstract:
The nanometer-scale structure produced by thin films of diblock copolymers makes them of great interest as templates for the microelectronics industry. We investigated the effect of annealing solvents, and mixtures of solvents, on symmetric poly(styrene-block-4-vinylpyridine) (PS-b-P4VP) diblock copolymer films to obtain the desired line patterns. In this paper, we used PS-b-P4VP of different molecular weights to demonstrate the scalability of this high-χ BCP system, which requires precise fine-tuning of interfacial energies; this is achieved by a surface treatment that improves the wetting properties and ordering and minimizes defect densities. Bare silicon substrates were also modified with a polystyrene brush and an ethylene glycol self-assembled monolayer in a simple, quick, and reproducible way. In addition, a novel and simple in situ hard-mask technique was used to generate sub-7 nm iron oxide nanowires with a high aspect ratio on a silicon substrate, which can be used to obtain silicon nanowires after pattern transfer.