985 results for Analytical procedure
Abstract:
In this paper, common criteria for residual strength evaluation at home and abroad are reviewed and seven methods are identified, namely the ASME-B31G, DM, WES-2805-97, CVDA-84, Burdekin, Irwin, and J-integral methods. A BP neural network is combined with a Genetic Algorithm (GA) into a modified BP-GA method to predict the residual strength and critical pressure of water-injection corrosion pipelines. Worked examples show that the results of the various methods differ greatly: the values calculated by the WES-2805-97, ASME-B31G, and CVDA-84 criteria and the Irwin fracture mechanics model are conservative and higher than those of the J-integral method, while the values from the Burdekin model and the DM fracture mechanics model are dangerous and lower than those of the J-integral method, and the values from the modified BP-GA method are moderate and close to those of the J-integral method. The modified BP-GA and J-integral methods are therefore considered the better methods for calculating the residual strength and critical pressure of water-injection corrosion pipelines.
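The hybrid the abstract describes can be illustrated with a minimal sketch: a genetic algorithm evolves candidate weight vectors for a small feed-forward network, and backpropagation-style gradient descent then fine-tunes the best genome. All names, network sizes, and the toy "residual strength" data below are illustrative assumptions, not the paper's actual model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: a synthetic "residual strength" score from two normalized
# defect features (depth, length) -- purely illustrative.
X = rng.uniform(0, 1, (40, 2))
y = 1.0 - 0.6 * X[:, 0] - 0.3 * X[:, 1]

def unpack(w):
    """Split a flat 17-parameter genome into a 2->4->1 network's weights."""
    W1 = w[:8].reshape(2, 4); b1 = w[8:12]
    W2 = w[12:16].reshape(4, 1); b2 = w[16:17]
    return W1, b1, W2, b2

def predict(w, X):
    W1, b1, W2, b2 = unpack(w)
    h = np.tanh(X @ W1 + b1)          # hidden layer
    return (h @ W2 + b2).ravel()

def mse(w):
    return np.mean((predict(w, X) - y) ** 2)

# --- GA phase: evolve a population of weight vectors ---
pop = rng.normal(0, 1, (30, 17))
for gen in range(60):
    fit = np.array([mse(w) for w in pop])
    parents = pop[np.argsort(fit)[:10]]            # truncation selection
    children = []
    for _ in range(20):
        a, b = parents[rng.integers(10, size=2)]
        mask = rng.random(17) < 0.5                # uniform crossover
        children.append(np.where(mask, a, b) + rng.normal(0, 0.1, 17))
    pop = np.vstack([parents] + children)

best = pop[np.argmin([mse(w) for w in pop])]

# --- BP phase: fine-tune the GA winner by (finite-difference) gradient descent ---
w, eps, lr = best.copy(), 1e-5, 0.1
for step in range(300):
    grad = np.zeros_like(w)
    for i in range(len(w)):
        d = np.zeros_like(w); d[i] = eps
        grad[i] = (mse(w + d) - mse(w - d)) / (2 * eps)
    w -= lr * grad

print(f"GA-only MSE:  {mse(best):.4f}")
print(f"BP-tuned MSE: {mse(w):.4f}")
```

The GA gives the gradient phase a good starting point, which is the usual rationale for BP-GA hybrids: the global search avoids poor local minima, and local descent sharpens the fit.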
Abstract:
This document describes the analytical methods used to quantify core organic chemicals in tissue and sediment collected as part of NOAA’s National Status and Trends Program (NS&T) for the years 2000-2006. Organic contaminant analytical methods used during the early years of the program are described in NOAA Technical Memoranda NOS ORCA 71 and 130 (Lauenstein and Cantillo, 1993; Lauenstein and Cantillo, 1998) for the years 1984-1992 and 1993-1996, respectively. These reports are available from our website (http://www.ccma.nos.gov). The methods detailed in this document were utilized by the Mussel Watch Project and the Bioeffects Project, which are both part of the NS&T Program. The Mussel Watch Project has been monitoring contaminants in bivalves and sediments since 1986 and is the longest active national contaminant monitoring program operating in U.S. coastal waters. Approximately 280 Mussel Watch sites are sampled on biennial and decadal timescales for bivalve tissue and sediment, respectively. Similarly, the Bioeffects Assessment Project began in 1986 to characterize estuaries and near-coastal environs. Using the sediment quality triad approach, which measures (1) levels of contaminants in sediments, (2) incidence and severity of toxicity, and (3) benthic macrofaunal communities, the Bioeffects Project describes the spatial extent of sediment toxicity. Contaminant assessment is a core function of both projects. These methods, while discussed here in the context of sediment and bivalve tissue, were also used with other matrices, including fish fillet, fish liver, nepheloid layer, and suspended particulate matter. The methods described herein are for the core organic contaminants monitored in the NS&T Program and include polycyclic aromatic hydrocarbons (PAHs), polychlorinated biphenyls (PCBs), butyltins, and organochlorines that have been analyzed consistently over the past 15-20 years.
Organic contaminants such as dioxins, perfluoro compounds and polybrominated diphenyl ethers (PBDEs) were analyzed periodically in special studies of the NS&T Program and will be described in another document. All of the analytical techniques described in this document were used by B&B Laboratories, Inc., an affiliate of TDI-Brooks International, Inc., in College Station, Texas, under contract to NOAA. The NS&T Program uses a performance-based system approach to obtain the best possible data quality and comparability, and requires laboratories to demonstrate precision, accuracy, and sensitivity to ensure results-based performance goals and measures. (PDF contains 75 pages)
Abstract:
The Taylor series expansion method is used to analytically calculate the Eulerian and Lagrangian time correlations in turbulent shear flows. The short-time behaviors of these correlation functions can be obtained from the series expansions. In particular, the propagation velocity and sweeping velocity in the elliptic model of space-time correlation are analytically calculated and further simplified using the sweeping and straining hypotheses. These two characteristic velocities mainly determine the space-time correlations.
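For orientation, a sketch in the standard elliptic-model notation from the space-time correlation literature (the symbols here are generic choices, not necessarily this paper's own): the model collapses the two-point, two-time correlation onto the equal-time correlation along an elliptic contour,

```latex
R(r, \tau) \approx R\left(r_E, 0\right),
\qquad
r_E^{2} = (r - V\tau)^{2} + U^{2}\tau^{2},
```

where $V$ is the propagation velocity and $U$ the sweeping velocity. A short-time Taylor expansion of a stationary correlation, $\rho(\tau) = 1 - \tau^{2}/(2\tau_c^{2}) + O(\tau^{4})$, supplies the curvature coefficients from which such characteristic velocities can be extracted.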
Abstract:
Polycyclic aromatic hydrocarbons, butyltins, polychlorinated biphenyls, DDT and metabolites, other chlorinated pesticides, trace and major elements, and a number of measures of contaminant effects are quantified in bivalves and sediments collected as part of the NOAA National Status and Trends (NS&T) Program. This document contains descriptions of some of the sampling and analytical protocols used by NS&T contract laboratories from 1993 through 1996. (PDF contains 257 pages)
Abstract:
Along with the vast progress in experimental quantum technologies there is an increasing demand for the quantification of entanglement between three or more quantum systems. Theory still does not provide adequate tools for this purpose. The objective is, besides the quest for exact results, to develop operational methods that allow for efficient entanglement quantification. Here we put forward an analytical approach that serves both these goals. We provide a simple procedure to quantify Greenberger-Horne-Zeilinger-type multipartite entanglement in arbitrary three-qubit states. For two qubits this method is equivalent to Wootters' seminal result for the concurrence. It establishes a close link between entanglement quantification and entanglement detection by witnesses, and can be generalised both to higher dimensions and to more than three parties.
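The two-qubit case the abstract mentions, Wootters' concurrence, has a closed form that is easy to evaluate numerically. The sketch below implements that standard formula only (the three-qubit GHZ-type procedure itself is not reproduced here); the test states are conventional examples.

```python
import numpy as np

sy = np.array([[0, -1j], [1j, 0]])
YY = np.kron(sy, sy)   # spin-flip operator sigma_y (x) sigma_y

def concurrence(rho):
    """Wootters' concurrence: C = max(0, l1 - l2 - l3 - l4), where the li
    are the square roots of the eigenvalues of rho * rho~ in decreasing
    order, with rho~ = (sy (x) sy) rho* (sy (x) sy)."""
    rho_tilde = YY @ rho.conj() @ YY
    evals = np.linalg.eigvals(rho @ rho_tilde)
    lam = np.sort(np.sqrt(np.abs(evals.real)))[::-1]
    return max(0.0, lam[0] - lam[1] - lam[2] - lam[3])

# Maximally entangled Bell state (|00> + |11>)/sqrt(2): concurrence 1.
bell = np.zeros(4, dtype=complex); bell[0] = bell[3] = 1 / np.sqrt(2)
rho_bell = np.outer(bell, bell.conj())

# Product state |00>: concurrence 0.
e00 = np.zeros(4, dtype=complex); e00[0] = 1.0
rho_prod = np.outer(e00, e00.conj())

print(round(concurrence(rho_bell), 6))   # -> 1.0
print(round(concurrence(rho_prod), 6))   # -> 0.0
```

Because the formula works for arbitrary mixed states, it is a convenient baseline against which generalisations to three parties or higher dimensions can be checked.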
Abstract:
One of the major concerns in an Intelligent Transportation System (ITS) scenario, such as that which may be found on a long-distance train service, is the provision of efficient communication services, satisfying users' expectations and fulfilling even highly demanding application requirements, such as safety-oriented services. In an ITS scenario, it is common to have a significant number of onboard devices that comprise a cluster of nodes (a mobile network) demanding connectivity to outside networks. This demand has to be satisfied without service disruption; consequently, the mobility of the mobile network has to be managed. Due to the nature of mobile networks, efficient and lightweight protocols are desired in the ITS context to ensure adequate service performance. However, security is also a key factor in this scenario. Since mobility management is essential for providing communications, the protocol that manages this mobility has to be protected. Furthermore, there are safety-oriented services in this scenario, so user application data should also be protected. Nevertheless, providing security is expensive in terms of efficiency. Based on these considerations, we have developed a solution for managing network mobility in ITS scenarios: the NeMHIP protocol. This approach provides secure management of network mobility in an efficient manner. In this article, we present this protocol and the strategy developed to keep its security and efficiency at satisfactory levels. We also present the analytical models developed to quantitatively analyze the efficiency of the protocol. More specifically, we have developed models for assessing it in terms of signaling cost, which demonstrate that NeMHIP generates up to 73.47% less signaling than other relevant approaches. The results obtained therefore demonstrate that NeMHIP is the most efficient and secure solution for providing communications in mobile network scenarios such as an ITS context.
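Signaling-cost models of this kind typically sum, over each signaling procedure, the bytes exchanged per run divided by the mean interval between runs. The sketch below is a generic, hypothetical version of such a model; the procedure names, message counts, sizes, and intervals are invented for illustration and are not NeMHIP's actual parameters or results.

```python
def signaling_cost(procedures):
    """Mean signaling load in bytes/s.
    procedures: list of (n_messages, bytes_per_message, interval_s) tuples,
    one per signaling procedure (e.g. mobility update, re-keying)."""
    return sum(n * size / interval for n, size, interval in procedures)

# Illustrative numbers only: a lightweight protocol vs. a heavier baseline.
lightweight = [(2, 80, 30.0),    # mobility update: 2 msgs of 80 B every 30 s
               (4, 120, 300.0)]  # periodic re-keying: 4 msgs of 120 B every 300 s
baseline    = [(4, 120, 30.0),
               (6, 150, 300.0)]

c_light = signaling_cost(lightweight)
c_base = signaling_cost(baseline)
saving = 100 * (1 - c_light / c_base)   # relative saving in percent
print(f"relative saving: {saving:.1f}%")
```

Plugging in a protocol's measured message counts and timers into such a formula is what allows headline figures like "X% less signaling" to be derived analytically rather than by simulation.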
Abstract:
After 20 annual meetings it is worth looking back to see how it all started. Although there has been very little collaboration on research projects between member institutes under the auspices of WEFTA, co-operation in more neutral areas of common interest was developed at an early stage. The area which has proved very fruitful is methodology. It was agreed that probably the best way to make progress was to arrange meetings at each laboratory in turn, where experienced, practising scientists could describe in detail how they carried out analyses. In this way, difficulties could be demonstrated or uncovered, and the accuracy, precision, efficiency and cost of the methods used in different laboratories could be compared.
Abstract:
This thesis considers in detail the dynamics of two oscillators with weak nonlinear coupling. There are three classes of such problems: non-resonant, where the Poincaré procedure is valid to the order considered; weakly resonant, where the Poincaré procedure breaks down because small divisors appear (but do not affect the O(1) term) and strongly resonant, where small divisors appear and lead to O(1) corrections. A perturbation method based on Cole's two-timing procedure is introduced. It avoids the small divisor problem in a straightforward manner, gives accurate answers which are valid for long times, and appears capable of handling all three types of problems with no change in the basic approach.
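As a generic illustration of the two-timing ansatz (standard textbook notation, not necessarily the thesis's own symbols): a fast time $t$ and a slow time $T = \epsilon t$ are treated as independent variables,

```latex
x(t) = X_0(t, T) + \epsilon\, X_1(t, T) + O(\epsilon^{2}),
\qquad T = \epsilon t,
\qquad \frac{d}{dt} = \frac{\partial}{\partial t} + \epsilon\,\frac{\partial}{\partial T},
```

and secular (resonant) terms at each order are removed by letting the slowly varying amplitudes depend on $T$. This is what keeps the expansion uniformly valid for long times and sidesteps the small divisors that break the Poincaré procedure.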
One example of each type is studied with the aid of this procedure: for the nonresonant case the answer is equivalent to the Poincaré result; for the weakly resonant case the analytic form of the answer is found to depend (smoothly) on the difference between the initial energies of the two oscillators; for the strongly resonant case we find that the amplitudes of the two oscillators vary slowly with time as elliptic functions of ϵ t, where ϵ is the (small) coupling parameter.
Our results suggest that, as one might expect, the dynamical behavior of such systems varies smoothly with changes in the ratio of the fundamental frequencies of the two oscillators. Thus the pathological behavior of Whittaker's adelphic integrals as the frequency ratio is varied appears to be due to the fact that Whittaker ignored the small divisor problem. The energy sharing properties of these systems appear to depend strongly on the initial conditions, so that the systems are not ergodic.
The perturbation procedure appears to be applicable to a wide variety of other problems in addition to those considered here.
Abstract:
Deference to committees in Congress has been a much-studied phenomenon for close to 100 years. This deference can be characterized as the unwillingness of a potentially winning coalition on the House floor to impose its will on a small minority, a standing committee. The congressional scholar is then faced with two problems: observing such deference to committees, and explaining it. Shepsle and Weingast have proposed the existence of an ex-post veto for standing committees as an explanation of committee deference. They claim that because conference reports in the House and Senate are considered under a rule that does not allow amendments, the conferees enjoy agenda-setting power. In this paper I describe a test of this hypothesis (along with competing hypotheses regarding the effects of the conference procedure). A random-utility model is utilized to estimate legislators' ideal points on appropriations bills from 1973 through 1980. I prove two things: 1) that committee deference cannot be said to be a result of the conference procedure; and moreover 2) that committee deference does not appear to exist at all.
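A random-utility ideal-point model of the kind the abstract invokes can be sketched in one dimension: a legislator with ideal point $x$ compares quadratic losses at the Yea and Nay outcomes, plus a random shock, giving a logit choice probability. The function, parameter names, and numbers below are illustrative assumptions, not the paper's estimator.

```python
import math

def p_yea(x, yea, nay):
    """Probability a legislator with ideal point x votes Yea, under
    quadratic utility U(z) = -(x - z)^2 plus logistic shocks:
    P(Yea) = logistic( (x - nay)^2 - (x - yea)^2 )."""
    d = (x - nay) ** 2 - (x - yea) ** 2   # utility difference Yea - Nay
    return 1.0 / (1.0 + math.exp(-d))

# A centrist (x = 0) facing a bill that moves policy from 1.0 to 0.2
# votes Yea with high probability, since 0.2 is much closer to 0.
print(p_yea(0.0, 0.2, 1.0))
```

Estimation then proceeds by choosing the ideal points (and bill locations) that maximize the likelihood of the observed roll-call votes; comparing estimates across procedural settings is what permits tests like the conference-procedure hypothesis here.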