856 results for REPRODUCIBILITY OF RESULTS
Abstract:
The toxicity of sediments in Sabine Lake, Texas, and adjoining Intracoastal Waterway canals was determined as part of bioeffects assessment studies managed by NOAA’s National Status and Trends Program. The objectives of the survey were to determine: (1) the incidence and degree of toxicity of sediments throughout the study area; (2) the spatial patterns (or gradients) in chemical contamination and toxicity, if any, throughout the study area; (3) the spatial extent of chemical contamination and toxicity; and (4) the statistical relationships between measures of toxicity and concentrations of chemicals in the sediments. Surficial sediment samples were collected during August 1995 from 66 randomly chosen locations. Laboratory toxicity tests were performed as indicators of potential ecotoxicological effects in sediments. A battery of tests was performed to generate information from different phases (components) of the sediments. Tests were selected to represent a range of toxicological endpoints from acute to chronic sublethal responses. Toxicological tests were conducted to measure: reduced survival of adult amphipods exposed to solid-phase sediments; impaired fertilization success and abnormal morphological development in gametes and embryos, respectively, of sea urchins exposed to pore waters; reduced metabolic activity of a marine bioluminescent bacterium exposed to organic solvent extracts; and induction of a cytochrome P-450 reporter gene system in exposures to solvent extracts of the sediments. Chemical analyses were performed on portions of each sample to quantify the concentrations of trace metals, polynuclear aromatic hydrocarbons, and chlorinated organic compounds. Correlation analyses were conducted to determine the relationships between measures of toxicity and concentrations of potentially toxic substances in the samples.
Based upon the compilation of results from chemical analyses and toxicity tests, the quality of sediments in Sabine Lake and vicinity did not appear to be severely degraded. Chemical concentrations rarely exceeded effects-based numerical guidelines, suggesting that toxicant-induced effects would not be expected in most areas. None of the samples was highly toxic in acute amphipod survival tests, and a minority (23%) of samples were highly toxic in sublethal urchin fertilization tests. Although toxic responses occurred frequently (94% of samples) in urchin embryo development tests performed with 100% pore waters, toxicity diminished markedly in tests done with diluted pore waters. Microbial bioluminescent activity was not reduced to a great degree (no EC50 <0.06 mg/ml) and cytochrome P-450 activity was not highly induced (6 samples exceeded 37.1 µg/g benzo[a]pyrene equivalents) in tests done with organic solvent extracts. Urchin embryological development was highly correlated with concentrations of ammonia and many trace metals. Cytochrome P-450 induction was highly correlated with concentrations of a number of classes of organic compounds (including the polynuclear aromatic hydrocarbons and chlorinated compounds). (PDF contains 51 pages)
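The correlation step the abstract describes can be sketched with synthetic numbers. Everything below is invented for illustration (the report's analyses used measured concentrations and toxicity endpoints, and a full battery of chemicals); the sketch only shows the shape of the computation:

```python
# Minimal sketch: correlate a hypothetical contaminant concentration with a
# hypothetical toxicity endpoint. All values are synthetic, for illustration.
import numpy as np

rng = np.random.default_rng(0)
n_samples = 20
ammonia = rng.uniform(0.1, 5.0, n_samples)               # hypothetical mg/L
# a response that worsens with ammonia, plus measurement noise
pct_abnormal = 10.0 * ammonia + rng.normal(0.0, 2.0, n_samples)

rho = np.corrcoef(ammonia, pct_abnormal)[0, 1]
print(f"correlation between ammonia and abnormal development: r = {rho:.2f}")
```

A real analysis would repeat this across every measured analyte and endpoint pair, typically with rank-based (Spearman) correlations to guard against outliers.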
Abstract:
An investigation was conducted into the deaths of more than 220 bottlenose dolphins (Tursiops truncatus) that occurred within the coastal bay ecosystem of mid-Texas between January and May 1992. The high mortality rate was unusual in that it was limited to a relatively small geographical area, occurred primarily within an inshore bay system separated from the Gulf of Mexico by barrier islands, and coincided with deaths of other taxa including birds and fish. Factors examined to determine the potential causes of the dolphin mortalities included microbial pathogens, natural biotoxins, industrial pollutants, other environmental contaminants, and direct human interactions. Emphasis was placed on nonpoint source pesticide runoff from agricultural areas, which had resulted from record rainfall that occurred during the period of increased mortality. Analytical results from sediment, water, and biota indicated that biotoxins, trace metals, and industrial chemical contamination were not likely causative factors in this mortality event. Elevated concentrations of pesticides (atrazine and aldicarb) were detected in surface water samples from bays within the region, and bay salinities were reduced to <10 ppt from December 1991 through April 1992 due to record rainfall and freshwater runoff exceeding any levels since 1939. Prolonged exposure to low salinity could have played a significant role in the unusual mortalities because low salinity exposure may cause disruption of the permeability barrier in dolphin skin. The lack of established toxicity data for marine mammals, particularly dermal absorption and bioaccumulation, precludes accurate toxicological interpretation of results beyond a simple comparison to terrestrial mammalian models. Results clearly indicated that significant periods of agricultural runoff and accompanying low salinities co-occurred with the unusual mortality event in Texas, but no definitive cause of the mortalities was determined. 
(PDF file contains 25 pages.)
Abstract:
Background: Polymerase Chain Reaction (PCR) and Restriction Fragment Length Polymorphism of PCR products (PCR-RFLP) are extensively used molecular biology techniques. An exercise for the design and simulation of PCR and PCR-RFLP experiments would be a useful educational tool. Findings: An online PCR and PCR-RFLP exercise has been created that requires users to find the target genes, compare them, design primers, search for restriction endonucleases, and finally to simulate the experiment. Each user of the service is randomly assigned a gene from Escherichia coli; to complete the exercise, users must design an experiment capable of distinguishing among E. coli strains. By applying the experimental procedure to all completely sequenced E. coli strains, a basic understanding of strain comparison and clustering can also be acquired. Comparison of results obtained in different experiments is also very instructive. Conclusions: The exercise is freely available at http://insilico.ehu.es/edu.
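The core of such a simulation is small enough to sketch directly: locate the primers on the template, take the amplicon between them, then cut the amplicon at every occurrence of a restriction site. The sequences and the EcoRI-like site below are illustrative inventions, not taken from the exercise:

```python
# Toy PCR / PCR-RFLP simulation. Sequences and the GAATTC site are examples.

def revcomp(seq: str) -> str:
    """Reverse complement of a DNA sequence."""
    comp = str.maketrans("ACGT", "TGCA")
    return seq.translate(comp)[::-1]

def simulate_pcr(template: str, fwd: str, rev: str) -> str:
    """Return the amplicon delimited by the forward primer and the reverse
    primer (given 5'->3', so it matches the reverse complement of the
    template's top strand)."""
    start = template.find(fwd)
    end = template.find(revcomp(rev))
    if start == -1 or end == -1 or end < start:
        raise ValueError("primers do not define a product on this template")
    return template[start:end + len(rev)]

def digest(amplicon: str, site: str, cut_offset: int) -> list[int]:
    """Fragment lengths after cutting at each occurrence of `site`,
    `cut_offset` bases into the site (GAATTC cut after G -> offset 1)."""
    cuts, pos = [], amplicon.find(site)
    while pos != -1:
        cuts.append(pos + cut_offset)
        pos = amplicon.find(site, pos + 1)
    bounds = [0] + cuts + [len(amplicon)]
    return [bounds[i + 1] - bounds[i] for i in range(len(bounds) - 1)]

template = "TTTATGCGTACGGAATTCCTAGCTAGGACCTAAA"
product = simulate_pcr(template, fwd="ATGCGTACG", rev="TTTAGGTCC")
print(len(product), digest(product, "GAATTC", 1))  # → 31 [10, 21]
```

Distinguishing strains then reduces to comparing these fragment-length lists across the amplicons obtained from each genome.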
Abstract:
We have applied the Schwinger Multichannel Method (SMC) to the study of electronically inelastic, low-energy electron-molecule collisions. The focus of these studies has been the assessment of the importance of multichannel coupling to the dynamics of these excitation processes. It has transpired that the promising quality of results realized in early SMC work on such inelastic scattering processes has been far more difficult to obtain in these more sophisticated studies.
We have attempted to understand the sources of instability of the SMC method which are evident in these multichannel studies. Particular instances of such instability have been considered in detail, which indicate that linear dependence, failure of the separable potential approximation, and difficulties in converging matrix elements involving recorrelation or Q-space terms all conspire to complicate application of the SMC method to these studies. A method involving singular value decomposition (SVD) has been developed to, if not resolve these problems, at least mitigate their deleterious effects on the computation of electronically inelastic cross sections.
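The SVD-based stabilization described here is, in spirit, a truncated pseudoinverse: directions of the linear system associated with tiny singular values (near linear dependence) are discarded rather than amplified. The following is a generic numpy sketch of that idea, not the SMC implementation itself:

```python
# Solve an almost rank-deficient linear system stably by zeroing tiny
# singular values instead of dividing by them.
import numpy as np

def solve_truncated_svd(A, b, rcond=1e-8):
    """Solve A x = b, discarding singular values below rcond * s_max."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    keep = s > rcond * s[0]
    s_inv = np.zeros_like(s)
    s_inv[keep] = 1.0 / s[keep]
    return Vt.T @ (s_inv * (U.T @ b))

# Third row is numerically a copy of the first: near linear dependence.
A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [1.0, 2.0, 3.0 + 1e-12]])
b = np.array([1.0, 2.0, 1.0])
x = solve_truncated_svd(A, b, rcond=1e-8)
print(np.allclose(A @ x, b, atol=1e-6))
```

A direct solve of this system would amplify the 1e-12 direction enormously; the truncated solve returns the minimum-norm solution of the well-conditioned part.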
In conjunction with this SVD procedure, the SMC method has been applied to the study of the H_2, H_2O, and N_2 molecules. Rydberg excitations of the first two molecules were found to be most sensitive to multichannel coupling near threshold. The (3σ_g → 1π_g) and (1π_u → 1π_g) valence excitations of the N_2 molecule were found to be strongly influenced by the choice of channel coupling scheme at all collision energies considered in these studies.
Abstract:
25 p.
Abstract:
31 p.
Abstract:
The author explains some aspects of sampling phytoplankton blooms and the evaluation of results obtained from different methods. Qualitative and quantitative sampling is covered as well as filtration, freeze-drying and toxin separation.
Abstract:
The influence of composition on the structure and on the electric and magnetic properties of amorphous Pd-Mn-P and Pd-Co-P prepared by rapid quenching techniques was investigated in terms of (1) the 3d band filling of the first transition metal group, (2) the effect of the phosphorus concentration, phosphorus acting as an electron donor, and (3) the transition metal concentration.
The structure is essentially characterized by a set of polyhedral subunits inverse to the packing of hard spheres in real space. Examination of computer-generated distribution functions using a Monte Carlo random statistical distribution of these polyhedral entities demonstrated the reproducibility of the experimentally calculated atomic distribution function. As a result, several possible "structural parameters" are proposed, such as: the number of nearest neighbors, the metal-to-metal distance, the degree of short-range order, and the affinity between metal-metal and metal-metalloid pairs. It is shown that the degree of disorder increases from Ni to Mn. Similar behavior is observed with increase in the phosphorus concentration.
The magnetic properties of Pd-Co-P alloys show that they are ferromagnetic, with a Curie temperature between 272 and 399°K as the cobalt concentration increases from 15 to 50 at.%. Below 20 at.% Co, the short-range exchange interactions which produce the ferromagnetism are unable to establish long-range magnetic order, and a peak in the magnetization appears at the lowest temperature range. The electrical resistivity measurements were performed from liquid helium temperatures up to the vicinity of the melting point (900°K). The thermomagnetic analysis was carried out under an applied field of 6.0 kOe. The electrical resistivity of Pd-Co-P shows the coexistence of a Kondo-like minimum with ferromagnetism. The minimum becomes less important as the transition metal concentration increases, and the coefficients of ln T and T^2 become smaller and strongly temperature dependent. The negative magnetoresistivity is a strong indication of the existence of localized moments.
The temperature coefficient of resistivity, which is positive for Pd-Fe-P, Pd-Ni-P, and Pd-Co-P, becomes negative for Pd-Mn-P. It is possible to account for the negative temperature dependence by the localized spin fluctuation model and the high density of states at the Fermi energy, which becomes maximum between Mn and Cr. The magnetization curves for Pd-Mn-P are typical of those resulting from the interplay of different exchange forces. The established relationship between susceptibility and resistivity confirms the localized spin fluctuation model. The magnetoresistivity of Pd-Mn-P could be interpreted in terms of a short-range magnetic ordering that could arise from Ruderman-Kittel type interactions.
Abstract:
A study is made of the accuracy of electronic digital computer calculations of ground displacement and response spectra from strong-motion earthquake accelerograms. This involves an investigation of methods of the preparatory reduction of accelerograms into a form useful for the digital computation and of the accuracy of subsequent digital calculations. Various checks are made for both the ground displacement and response spectra results, and it is concluded that the main errors are those involved in digitizing the original record. Differences resulting from various investigators digitizing the same experimental record may become as large as 100% of the maximum computed ground displacements. The spread of the results of ground displacement calculations is greater than that of the response spectra calculations. Standardized methods of adjustment and calculation are recommended, to minimize such errors.
Studies are made of the spread of response spectral values about their mean. The distribution is investigated experimentally by Monte Carlo techniques using an electric analog system with white noise excitation, and histograms are presented indicating the dependence of the distribution on the damping and period of the structure. Approximate distributions are obtained analytically by confirming and extending existing results with accurate digital computer calculations. A comparison of the experimental and analytical approaches indicates good agreement for low damping values where the approximations are valid. A family of distribution curves to be used in conjunction with existing average spectra is presented. The combination of analog and digital computations used with Monte Carlo techniques is a promising approach to the statistical problems of earthquake engineering.
Methods of analysis of very small earthquake ground motion records obtained simultaneously at different sites are discussed. The advantages of Fourier spectrum analysis for certain types of studies and methods of calculation of Fourier spectra are presented. The digitizing and analysis of several earthquake records is described and checks are made of the dependence of results on digitizing procedure, earthquake duration and integration step length. Possible dangers of a direct ratio comparison of Fourier spectra curves are pointed out and the necessity for some type of smoothing procedure before comparison is established. A standard method of analysis for the study of comparative ground motion at different sites is recommended.
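The response-spectrum computation discussed in this abstract can be sketched compactly: integrate a damped single-degree-of-freedom oscillator over the accelerogram for a range of natural periods and record the peak displacement. The synthetic "accelerogram" and the semi-implicit Euler integrator below are illustrative only; production codes use real digitized records and more careful integration schemes:

```python
# Toy pseudo-displacement response spectrum of a synthetic accelerogram.
import math

def spectral_displacement(accel, dt, period, damping=0.05):
    """Peak displacement of an SDOF oscillator x'' + 2 z w x' + w^2 x = -a_g."""
    w = 2.0 * math.pi / period
    x, v, peak = 0.0, 0.0, 0.0
    for a_g in accel:
        v += dt * (-a_g - 2.0 * damping * w * v - w * w * x)
        x += dt * v
        peak = max(peak, abs(x))
    return peak

# Synthetic 4 s record: a decaying 2 Hz sinusoidal pulse sampled at 100 Hz.
dt = 0.01
accel = [math.exp(-i * 0.01) * math.sin(2.0 * math.pi * 2.0 * i * 0.01)
         for i in range(400)]
spectrum = {T: spectral_displacement(accel, dt, T) for T in (0.2, 0.5, 1.0, 2.0)}
for T, sd in spectrum.items():
    print(f"T = {T:.1f} s  Sd = {sd:.4f}")
```

The abstracts' point about digitization error can be seen directly with such a sketch: perturbing the input samples propagates into the spectrum far less severely than into doubly integrated ground displacement.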
Abstract:
Let L be the algebra of all linear transformations on an n-dimensional vector space V over a field F and let A, B ∈ L. Let A_{i+1} = A_i B - B A_i, i = 0, 1, 2, …, with A = A_0. Let f_k(A, B; σ) = A_{2K+1} - σ_1 A_{2K-1} + σ_2 A_{2K-3} - … + (-1)^K σ_K A_1, where σ = (σ_1, σ_2, …, σ_K), the σ_i belong to F, and K = k(k-1)/2. Taussky and Wielandt [Proc. Amer. Math. Soc., 13 (1962), 732-735] showed that f_n(A, B; σ) = 0 if σ_i is the ith elementary symmetric function of the (β_r - β_s)^2, 1 ≤ r < s ≤ n, i = 1, 2, …, N, with N = n(n-1)/2, where the β_r are the characteristic roots of B. In this thesis we discuss relations involving f_k(X, Y; σ) where X, Y ∈ L and 1 ≤ k < n. We show: 1. If F is infinite and if for each X ∈ L there exists σ so that f_k(A, X; σ) = 0, where 1 ≤ k < n, then A is a scalar transformation. 2. If F is algebraically closed, a necessary and sufficient condition that there exists a basis of V with respect to which the matrices of A and B are both in block upper triangular form, where the blocks on the diagonals are either one- or two-dimensional, is that certain products X_1 X_2 … X_r belong to the radical of the algebra generated by A and B over F, where X_i has the form f_2(A, P(A, B); σ), for all polynomials P(x, y). We partially generalize this to the case where the blocks have dimensions ≤ k. 3. If A and B generate L, if the characteristic of F does not divide n, and if there exists σ so that f_k(A, B; σ) = 0 for some k with 1 ≤ k < n, then the characteristic roots of B belong to the splitting field of g_k(w; σ) = w^{2K+1} - σ_1 w^{2K-1} + σ_2 w^{2K-3} - … + (-1)^K σ_K w over F. We use this result to prove a theorem involving a generalized form of property L [cf. Motzkin and Taussky, Trans. Amer. Math. Soc., 73 (1952), 108-114]. 4. Also we give mild generalizations of results of McCoy [Amer. Math. Soc. Bull., 42 (1936), 592-600] and Drazin [Proc. London Math. Soc., 1 (1951), 222-231].
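The Taussky-Wielandt identity quoted in this abstract can be checked numerically in the smallest case n = 2, where K = n(n-1)/2 = 1 and the identity reads A_3 - σ_1 A_1 = 0 with σ_1 = (β_1 - β_2)^2 for the characteristic roots β of B:

```python
# Numerical check of f_2(A, B; sigma) = A_3 - sigma_1 * A_1 = 0 for 2x2
# matrices, with sigma_1 the squared difference of the eigenvalues of B.
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(2, 2))
B = rng.normal(size=(2, 2))

A1 = A @ B - B @ A              # A_{i+1} = A_i B - B A_i, A_0 = A
A2 = A1 @ B - B @ A1
A3 = A2 @ B - B @ A2

beta = np.linalg.eigvals(B)
sigma1 = (beta[0] - beta[1]) ** 2   # real even for a complex-conjugate pair

residual = A3 - sigma1 * A1
print(np.max(np.abs(residual)))  # zero up to rounding error
```

For diagonal B the check is immediate: each entry of A_i picks up a factor (β_j - β_i) per bracket, so A_3 and (β_1 - β_2)^2 A_1 agree entrywise; the random-matrix check above covers the general (generic) case.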
Abstract:
My focus in this thesis is to contribute to a more thorough understanding of the mechanics of ice and deformable glacier beds. Glaciers flow under their own weight through a combination of deformation within the ice column and basal slip, which involves both sliding along and deformation within the bed. Deformable beds, which are made up of unfrozen sediment, are prevalent in nature and are often the primary contributors to ice flow wherever they are found. Their granular nature imbues them with unique mechanical properties that depend on the granular structure and hydrological properties of the bed. Despite their importance for understanding glacier flow and the response of glaciers to changing climate, the mechanics of deformable glacier beds are not well understood.
Our general approach to understanding the mechanics of bed deformation and their effect on glacier flow is to acquire synoptic observations of ice surface velocities and their changes over time and to use those observations to infer the mechanical properties of the bed. We focus on areas where changes in ice flow over time are due to known environmental forcings and where the processes of interest are largely isolated from other effects. To make this approach viable, we further develop observational methods that involve the use of mapping radar systems. Chapters 2 and 5 focus largely on the development of these methods and analysis of results from ice caps in central Iceland and an ice stream in West Antarctica. In Chapter 3, we use these observations to constrain numerical ice flow models in order to study the mechanics of the bed and the ice itself. We show that the bed in an Iceland ice cap deforms plastically and we derive an original mechanistic model of ice flow over plastically deforming beds that incorporates changes in bed strength caused by meltwater flux from the surface. Expanding on this work in Chapter 4, we develop a more detailed mechanistic model for till-covered beds that helps explain the mechanisms that cause some glaciers to surge quasi-periodically. In Antarctica, we observe and analyze the mechanisms that allow ocean tidal variations to modulate ice stream flow tens of kilometers inland. We find that the ice stream margins are significantly weakened immediately upstream of the area where ice begins to float and that this weakening likely allows changes in stress over the floating ice to propagate through the ice column.
Abstract:
A new method is proposed for measuring the non-flatness of the workpiece-stage square mirror of a step-and-scan projection lithography tool. Taking the mirror's translation compensation and rotation compensation as the measurement targets, two dual-frequency laser interferometers are used to measure the position and rotation of the workpiece stage in the x and y directions. The non-flatness measurement is divided into a number of sequences separated by a fixed offset, each sequence consisting of several back-and-forth measurements over the mirror's effective area. The mirror's rotation compensation is computed from the results of all sequences; temporary boundary conditions are established for each sequence, from which the coarse translation compensation measured by each sequence is computed; cubic spline interpolation and the least-squares method are then used to relate the sequences to one another, smoothly joining all measurement sequences to obtain the accurate translation compensation. The results show that the method, used to measure the mirror's …
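The stitching step in this abstract, joining per-sequence profiles whose datums differ, can be illustrated with a toy one-dimensional version: two scans of the same profile overlap, each carries an unknown constant offset, and a least-squares fit over the overlap recovers the relative offset so the scans can be merged. This is purely illustrative; the actual method also handles rotation terms and uses cubic-spline interpolation between sequences:

```python
# Toy 1-D sequence stitching: recover the constant offset between two
# overlapping scans of the same "mirror" profile and merge them.
import numpy as np

true_profile = np.sin(np.linspace(0.0, np.pi, 100))   # the shape being scanned
scan_a = true_profile[:60] + 0.30                      # scan with datum A
scan_b = true_profile[40:] - 0.10                      # scan with datum B

# Samples 40..59 of the profile appear in both scans.
overlap_a = scan_a[40:60]
overlap_b = scan_b[:20]
# Least-squares estimate of the constant offset between the two datums.
offset = np.mean(overlap_a - overlap_b)

stitched = np.concatenate([scan_a, scan_b[20:] + offset])
print(np.allclose(stitched - stitched[0],
                  true_profile - true_profile[0], atol=1e-9))
```

Only relative shape is recoverable this way (the stitched profile is defined up to a global constant), which is why the paper's method needs boundary conditions per sequence.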
Abstract:
A new method is proposed for accurately detecting the non-orthogonality of the laser interferometer measurement system of a lithography tool. Alignment marks are exposed onto the surface of a silicon wafer and developed; an optical alignment system then measures the deviations between the theoretical exposure positions of the marks and the positions actually read. From a derived linear model relating the position deviations to the non-orthogonality factor, the axis scale ratio, and process-induced errors, the non-orthogonality of the interferometer measurement system is computed by the least-squares principle. Experimental results show that, measuring the same wafer at different rotation angles, the measurement repeatability of the interferometer system's non-orthogonality factor is better than 0.01 μrad, and the measurement repeatability of the axis scale ratio is better than 0.7×10⁻⁶. Measuring with different wafers, the measurement reproducibility of the non-orthogonality factor is better than …
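A toy version of the least-squares step in this abstract: synthesize mark position deviations from a small non-orthogonality angle and an axis scale error, then recover both with numpy's linear least squares. The model form and all numbers below are illustrative assumptions, not the paper's exact derivation:

```python
# Fit a linear error model  dy = theta * x + scale * y + noise  by least squares.
import numpy as np

rng = np.random.default_rng(2)
x = rng.uniform(-50e-3, 50e-3, 30)      # mark positions, metres (assumed)
y = rng.uniform(-50e-3, 50e-3, 30)

theta = 5e-6                             # non-orthogonality, rad (assumed)
scale = 3e-7                             # axis scale error (assumed)
noise = rng.normal(0.0, 0.5e-9, 30)      # 0.5 nm read noise (assumed)
dy = theta * x + scale * y + noise       # y-deviation of each mark

M = np.column_stack([x, y])
(theta_hat, scale_hat), *_ = np.linalg.lstsq(M, dy, rcond=None)
print(f"theta_hat = {theta_hat:.3e} rad, scale_hat = {scale_hat:.3e}")
```

Because the model is linear in the unknown factors, repeating the fit on wafers rotated to different angles separates the interferometer's geometry errors from wafer-process errors, which is the basis of the repeatability figures quoted above.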
Abstract:
The centralized paradigm of a single controller and a single plant upon which modern control theory is built is no longer applicable to modern cyber-physical systems of interest, such as the power grid, software-defined networks, or automated highway systems, as these are all large-scale and spatially distributed. Both the scale and the distributed nature of these systems have motivated the decentralization of control schemes into local sub-controllers that measure, exchange, and act on locally available subsets of the globally available system information. This decentralization of control logic leads to different decision makers acting on asymmetric information sets, introduces the need for coordination between them, and perhaps not surprisingly makes the resulting optimal control problem much harder to solve. In fact, shortly after such questions were posed, it was realized that seemingly simple decentralized optimal control problems are computationally intractable, with the Witsenhausen counterexample being a famous instance of this phenomenon. Spurred on by this perhaps discouraging result, a concerted 40-year effort to identify tractable classes of distributed optimal control problems culminated in the notion of quadratic invariance, which loosely states that if sub-controllers can exchange information with each other at least as quickly as the effect of their control actions propagates through the plant, then the resulting distributed optimal control problem admits a convex formulation.
The identification of quadratic invariance as an appropriate means of "convexifying" distributed optimal control problems led to a renewed enthusiasm in the controller synthesis community, resulting in a rich set of results over the past decade. The contributions of this thesis can be seen as being a part of this broader family of results, with a particular focus on closing the gap between theory and practice by relaxing or removing assumptions made in the traditional distributed optimal control framework. Our contributions are to the foundational theory of distributed optimal control, and fall under three broad categories, namely controller synthesis, architecture design and system identification.
We begin by providing two novel controller synthesis algorithms. The first is a solution to the distributed H-infinity optimal control problem subject to delay constraints, and provides the only known exact characterization of delay-constrained distributed controllers satisfying an H-infinity norm bound. The second is an explicit dynamic programming solution to a two-player LQR state-feedback problem with varying delays. Accommodating varying delays represents an important first step in combining distributed optimal control theory with the area of Networked Control Systems that considers lossy channels in the feedback loop. Our next set of results is concerned with controller architecture design. When designing controllers for large-scale systems, the architectural aspects of the controller such as the placement of actuators, sensors, and the communication links between them can no longer be taken as given -- indeed, the task of designing this architecture is now as important as the design of the control laws themselves. To address this task, we formulate the Regularization for Design (RFD) framework, a unifying, computationally tractable approach, based on the model matching framework and atomic norm regularization, for the simultaneous co-design of a structured optimal controller and the architecture needed to implement it. Our final result is a contribution to distributed system identification. Traditional system identification techniques such as subspace identification are not computationally scalable, and destroy rather than leverage any a priori information about the system's interconnection structure. We argue that in the context of system identification, an essential building block of any scalable algorithm is the ability to estimate local dynamics within a large interconnected system. To that end we propose a promising heuristic for identifying the dynamics of a subsystem that is still connected to a large system.
We exploit the fact that the transfer function of the local dynamics is low-order, but full-rank, while the transfer function of the global dynamics is high-order, but low-rank, to formulate this separation task as a nuclear norm minimization problem. Finally, we conclude with a brief discussion of future research directions, with a particular emphasis on how to incorporate the results of this thesis, and those of optimal control theory in general, into a broader theory of dynamics, control and optimization in layered architectures.
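The separation described above relies on the nuclear norm (the sum of singular values) as a convex surrogate for rank. Its proximal operator, singular value soft-thresholding, is the basic computational primitive of nuclear norm minimization, sketched here with numpy; this is the generic building block only, not the thesis's full identification algorithm:

```python
# Nuclear norm and its proximal operator (singular value thresholding),
# demonstrated by recovering a low-rank matrix from small dense noise.
import numpy as np

def nuclear_norm(M):
    return np.linalg.svd(M, compute_uv=False).sum()

def svt(M, tau):
    """Proximal operator of tau * ||.||_* : soft-threshold singular values."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

rng = np.random.default_rng(3)
low_rank = rng.normal(size=(20, 2)) @ rng.normal(size=(2, 20))   # rank 2
noisy = low_rank + 0.01 * rng.normal(size=(20, 20))

denoised = svt(noisy, tau=0.5)
print(np.linalg.matrix_rank(denoised), round(nuclear_norm(denoised), 2))
```

Thresholding kills the small singular values contributed by the full-rank perturbation while barely shrinking the large ones, which is exactly the low-rank/full-rank separation the nuclear norm formulation exploits.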
Abstract:
INDELs are length polymorphisms generated by insertions and/or deletions of one or more nucleotides. INDEL markers, which are widely distributed throughout the genome and are characterized by high stability due to their low mutation rate (10⁻⁹), can be analyzed by PCR (Polymerase Chain Reaction) amplification. Their ease of analysis and the possibility of building multiplex systems capable of generating short amplicons (under 100 bp) make INDELs an important tool for human identification by DNA. To evaluate the efficiency of, and to validate, the methodology employing insertion/deletion polymorphisms for human identification, we used the Indel-plex ID system, capable of simultaneously amplifying 38 biallelic autosomal INDEL loci. Different biological samples (hair, saliva, blood, semen, and urine) were genotyped, showing reproducibility across all typings. The minimum DNA concentration required to amplify the 38 INDEL loci was 0.5 ng. Split-peak artifacts were observed in some samples. PCR products were purified with Sephadex resin, providing better analysis conditions, fewer artifacts, and an increase in the mean fluorescence intensity of the amplified alleles. The efficiency of the Indel-plex ID system in amplifying degraded DNA was verified during analysis of DNA samples extracted from human remains (bones and teeth). Compared with the Identifiler system, Indel-plex ID proved more efficient in terms of the number of loci genotyped and amplification quality. In kinship investigations performed with the Indel-plex ID system, it was possible to corroborate earlier results obtained from STR marker analysis. In analyses with in vivo samples, maximum Paternity Probabilities of 99.99998% were obtained.
For cases involving deceased alleged fathers, the Indel-plex ID system reinforced results obtained with the Identifiler and Minifiler systems. A Paternity Probability of 99.953%, obtained with the Indel-plex ID system, combined with the Paternity Probability of 99.957% obtained with the Minifiler system, yielded a final index of 99.99998%. The results showed that the INDEL loci of the Indel-plex ID system are highly informative, constituting an important tool for studies of human identification and kinship.
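The combined figure quoted above follows from standard likelihood-ratio arithmetic: convert each Paternity Probability W into a Paternity Index PI = W / (1 - W) (assuming equal prior odds and independent marker sets), multiply the indices, and convert back. A short check with the abstract's figures:

```python
# Combine two independent Paternity Probabilities via Paternity Indices.
def paternity_index(w: float) -> float:
    # W = PI / (PI + 1)  =>  PI = W / (1 - W), assuming prior odds of 1
    return w / (1.0 - w)

def combined_probability(*ws: float) -> float:
    pi = 1.0
    for w in ws:
        pi *= paternity_index(w)
    return pi / (pi + 1.0)

w_total = combined_probability(0.99953, 0.99957)  # Indel-plex ID + Minifiler
print(f"{w_total * 100:.5f}%")  # → 99.99998%
```

This reproduces the abstract's final index of 99.99998% from its two per-system probabilities.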