922 results for Best-known bounds
Abstract:
Understanding fluctuations in tropical cyclone activity along United States shores and abroad becomes increasingly important as coastal managers and planners seek to save lives, mitigate damage, and plan for resilience in the face of changing storminess and sea-level rise. Tropical cyclone activity has long been of concern to coastal areas, as these storms bring strong winds, heavy rains, and high seas. Given projections of a warming climate, current estimates suggest that tropical cyclones will increase not only in frequency but also in intensity (maximum sustained winds and minimum central pressures). An understanding of what has happened historically is an important step in identifying potential future changes in tropical cyclone frequency and intensity. The ability to detect such changes depends on a consistent and reliable global tropical cyclone dataset. Until recently, no central repository for historical tropical cyclone data existed. To fill this need, the International Best Track Archive for Climate Stewardship (IBTrACS) dataset was developed to collect all known global historical tropical cyclone data into a single source for dissemination. With this dataset, a global examination of changes in tropical cyclone frequency and intensity can be performed. Caveats apply to any historical tropical cyclone analysis, however, as the data contributed to the IBTrACS archive by the various tropical cyclone warning centers are still replete with biases that may stem from operational changes, inhomogeneous monitoring programs, and time discontinuities. A detailed discussion of the difficulties in detecting trends using tropical cyclone data can be found in Landsea et al. (2006). The following sections use the IBTrACS dataset to show the global spatial variability of tropical cyclone frequency and intensity.
Analyses will show where the strongest storms typically occur, the regions with the highest number of tropical cyclones per decade, and the locations of highest average maximum wind speeds.
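As a sketch of the kind of analysis described, the snippet below counts storms per decade and averages each storm's lifetime-maximum wind from a handful of made-up best-track records. The field layout and values are illustrative only, not the actual IBTrACS schema.

```python
from collections import defaultdict

# Toy records standing in for IBTrACS best-track fixes; the real archive
# stores one row per 6-hourly fix with storm id, time, position and winds.
# Field names and values here are invented for illustration.
fixes = [
    ("1990017N12260", 1990, 85),
    ("1990017N12260", 1990, 100),
    ("1992220N11325", 1992, 145),
    ("2004248N10332", 2004, 120),
    ("2005236N23285", 2005, 150),
]

# Lifetime-maximum wind per storm (one value per storm, not per fix)
peak = {}
for sid, year, wind in fixes:
    y, w = peak.get(sid, (year, 0))
    peak[sid] = (year, max(w, wind))

# Storm counts and mean peak intensity per decade
count = defaultdict(int)
winds = defaultdict(list)
for year, wind in peak.values():
    decade = (year // 10) * 10
    count[decade] += 1
    winds[decade].append(wind)

freq = dict(count)                                   # storms per decade
mean_peak = {d: sum(w) / len(w) for d, w in winds.items()}
```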
Abstract:
The dynamic properties of a structure are a function of its physical properties, and changes in the physical properties of the structure, including the introduction of structural damage, can cause changes in its dynamic behavior. Structural health monitoring (SHM) and damage detection methods provide a means to assess the structural integrity and safety of a civil structure using measurements of its dynamic properties. In particular, these techniques enable a quick damage assessment following a seismic event. In this thesis, the application of high-frequency seismograms to damage detection in civil structures is investigated.
Two novel methods for SHM are developed and validated using small-scale experimental testing, existing structures in situ, and numerical testing. The first method is developed for pre-Northridge steel-moment-resisting frame buildings that are susceptible to weld fracture at beam-column connections. The method is based on using the response of a structure to a nondestructive force (i.e., a hammer blow) to approximate the response of the structure to a damage event (i.e., weld fracture). The method is applied to a small-scale experimental frame, where the impulse response functions of the frame are generated during an impact hammer test. The method is also applied to a numerical model of a steel frame, in which weld fracture is modeled as the tensile opening of a Mode I crack. Impulse response functions are experimentally obtained for a steel moment-resisting frame building in situ. Results indicate that while acceleration and velocity records generated by a damage event are best approximated by the acceleration and velocity records generated by a colocated hammer blow, the method may not be robust to noise. The method seems to be better suited for damage localization, where information such as arrival times and peak accelerations can also provide indication of the damage location. This is of significance for sparsely-instrumented civil structures.
The second SHM method is designed to extract features from high-frequency acceleration records that may indicate the presence of damage. As short-duration high-frequency signals (i.e., pulses) can be indicative of damage, this method relies on the identification and classification of pulses in the acceleration records. It is recommended that, in practice, the method be combined with a vibration-based method that can be used to estimate the loss of stiffness. Briefly, pulses observed in the acceleration time series when the structure is known to be in an undamaged state are compared with pulses observed when the structure is in a potentially damaged state. By comparing the pulse signatures from these two situations, changes in the high-frequency dynamic behavior of the structure can be identified, and damage signals can be extracted and subjected to further analysis. The method is successfully applied to a small-scale experimental shear beam that is dynamically excited at its base using a shake table and damaged by loosening a screw to create a moving part. Although the damage is aperiodic and nonlinear in nature, the damage signals are accurately identified, and the location of damage is determined using the amplitudes and arrival times of the damage signal. The method is also successfully applied to detect the occurrence of damage in a test bed data set provided by the Los Alamos National Laboratory, in which nonlinear damage is introduced into a small-scale steel frame by installing a bumper mechanism that inhibits the amount of motion between two floors. The method is successfully applied and is robust despite a low sampling rate, though false negatives (undetected damage signals) begin to occur at high levels of damage when the frequency of damage events increases. The method is also applied to acceleration data recorded on a damaged cable-stayed bridge in China, provided by the Center of Structural Monitoring and Control at the Harbin Institute of Technology. 
Acceleration records obtained after the date of damage show a clear increase in high-frequency short-duration pulses compared to those recorded earlier. One pulse from the undamaged state and two damage pulses are identified from the data. The occurrence of the detected damage pulses is consistent with a progression of damage and matches the known chronology of damage.
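A minimal sketch of the pulse-identification idea described above (not the thesis's actual algorithm): high-pass a synthetic acceleration record by differencing, then flag short-duration excursions that stand out from a robust baseline. The signal, the injected pulse, and the threshold factor are all invented for illustration.

```python
import numpy as np

# Synthetic record: low-frequency structural response plus sensor noise,
# with one short-duration "damage" pulse injected at a known sample.
rng = np.random.default_rng(1)
fs = 1000                              # sampling rate, Hz (assumed)
t = np.arange(0, 2.0, 1 / fs)
accel = np.sin(2 * np.pi * 3 * t) + 0.01 * rng.standard_normal(t.size)
accel[1200] += 2.0                     # injected short-duration pulse

# Crude high-pass: first difference suppresses the low-frequency response
hf = np.diff(accel)

# Flag samples that exceed a robust baseline by a wide margin
thresh = np.median(np.abs(hf)) * 20
pulses = np.flatnonzero(np.abs(hf) > thresh)
```

A one-sample spike in acceleration shows up as two consecutive large values in the differenced record, so the detector reports the samples bracketing the pulse.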
Abstract:
The concept of a "projection function" in a finite-dimensional real or complex normed linear space H (the function PM which carries every element into the closest element of a given subspace M) is set forth and examined.
If dim M = dim H - 1, then PM is linear. If PN is linear for all k-dimensional subspaces N, where 1 ≤ k < dim M, then PM is linear.
The projective bound Q, defined to be the supremum of the operator norm of PM over all subspaces M, is in the range 1 ≤ Q < 2, and these limits are the best possible. For norms with Q = 1, PM is always linear, and a characterization of those norms is given.
If H also has an inner product (defined independently of the norm), so that a dual norm can be defined, then when PM is linear its adjoint PM^H is the projection onto (kernel PM)⊥ under the dual norm. The projective bounds of a norm and its dual are equal.
The notion of a pseudo-inverse F+ of a linear transformation F is extended to non-Euclidean norms. The distance from F to the set of linear transformations G of lower rank (in the sense of the operator norm ∥F - G∥) is c/∥F+∥, where c = 1 if the range of F fills its space, and 1 ≤ c < Q otherwise. The norms on both domain and range spaces have Q = 1 if and only if (F+)+ = F for every F. This condition is also sufficient to prove that we have (F+)H = (FH)+, where the latter pseudo-inverse is taken using dual norms.
In all results, the real and complex cases are handled in a completely parallel fashion.
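For reference, the two central quantitative claims above can be written compactly (Q is the projective bound and F+ the pseudo-inverse; this is a restatement of the abstract's statements, not an addition):

```latex
1 \le Q = \sup_{M}\lVert P_M \rVert < 2,
\qquad
\min_{\operatorname{rank} G \,<\, \operatorname{rank} F}\lVert F - G\rVert
\;=\; \frac{c}{\lVert F^{+}\rVert},
\quad
\begin{cases}
c = 1 & \text{if the range of } F \text{ fills its space},\\[2pt]
1 \le c < Q & \text{otherwise}.
\end{cases}
```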
Abstract:
This study addresses the problem of obtaining reliable velocities and displacements from accelerograms, a concern which often arises in earthquake engineering. A closed-form acceleration expression with random parameters is developed to test any strong-motion accelerogram processing method. Integration of this analytical time history yields the exact velocities, displacements and Fourier spectra. Noise and truncation can also be added. A two-step testing procedure is proposed and the original Volume II routine is used as an illustration. The main sources of error are identified and discussed. Although these errors may be reduced, it is impossible to extract the true time histories from an analog or digital accelerogram because of the uncertain noise level and missing data. Based on these uncertainties, a probabilistic approach is proposed as a new accelerogram processing method. A most probable record is presented as well as a reliability interval which reflects the level of error-uncertainty introduced by the recording and digitization process. The data is processed in the frequency domain, under assumptions governing either the initial value or the temporal mean of the time histories. This new processing approach is tested on synthetic records. It induces little error and the digitization noise is adequately bounded. Filtering is intended to be kept to a minimum and two optimal error-reduction methods are proposed. The "noise filters" reduce the noise level at each harmonic of the spectrum as a function of the signal-to-noise ratio. However, the correction at low frequencies is not sufficient to significantly reduce the drifts in the integrated time histories. The "spectral substitution method" uses optimization techniques to fit spectral models of near-field, far-field or structural motions to the amplitude spectrum of the measured data. 
The extremes of the spectrum of the recorded data where noise and error prevail are then partly altered, but not removed, and statistical criteria provide the choice of the appropriate cutoff frequencies. This correction method has been applied to existing strong-motion far-field, near-field and structural data with promising results. Since this correction method maintains the whole frequency range of the record, it should prove to be very useful in studying the long-period dynamics of local geology and structures.
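The testing idea at the heart of the study, integrating a closed-form acceleration and comparing against its exact integral, can be sketched numerically. The sine acceleration, amplitude, and step size below are arbitrary choices, not the study's actual test signal.

```python
import numpy as np

# Closed-form acceleration with a known exact integral:
#   a(t) = A*sin(w*t)  =>  v(t) = (A/w)*(1 - cos(w*t))  for v(0) = 0.
A, w = 0.5, 2 * np.pi * 1.5          # illustrative amplitude and frequency
dt = 0.005
t = np.arange(0, 4.0 + dt, dt)
a = A * np.sin(w * t)
v_exact = (A / w) * (1 - np.cos(w * t))

# Cumulative trapezoidal rule gives the numerically integrated velocity
v_num = np.concatenate(([0.0], np.cumsum((a[1:] + a[:-1]) * dt / 2)))

err = np.max(np.abs(v_num - v_exact))
```

With an analytical input like this, any processing routine can be scored by the size of `err`; noise and truncation would then be added on top, as the abstract describes.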
Abstract:
Mitochondria can remodel their membranes by fusing or dividing. These processes are required for the proper development and viability of multicellular organisms. At the cellular level, fusion is important for mitochondrial Ca2+ homeostasis, mitochondrial DNA maintenance, mitochondrial membrane potential, and respiration. Mitochondrial division, which is better known as fission, is important for apoptosis, mitophagy, and for the proper allocation of mitochondria to daughter cells during cellular division.
The functions of proteins involved in fission have been best characterized in the yeast model organism Saccharomyces cerevisiae. Mitochondrial fission in mammals shares some similarities with the yeast system. In both systems, a cytosolic dynamin-like protein, called Dnm1 in yeast and Drp1 in mammals, must be recruited to the mitochondrial surface and polymerized to promote membrane division. Recruitment of yeast Dnm1 requires only one mitochondrial outer membrane protein, named Fis1. Fis1 is conserved in mammals, but its importance for Drp1 recruitment is minor. In mammals, three other receptor proteins—Mff, MiD49, and MiD51—play a major role in recruiting Drp1 to mitochondria. Why mammals require three additional receptors, and whether they function together or separately, are fundamental questions for understanding the mechanism of mitochondrial fission in mammals.
We have determined that Mff, MiD49, or MiD51 can function independently of one another to recruit Drp1 to mitochondria. Fis1 plays a minor role in Drp1 recruitment, suggesting that the emergence of these additional receptors has replaced the system used by yeast. Additionally, we found that Fis1/Mff and the MiDs regulate Drp1 activity differentially. Fis1 and Mff promote constitutive mitochondrial fission, whereas the MiDs activate recruited Drp1 only during loss of respiration.
To better understand the function of the MiDs, we have determined the atomic structure of the cytoplasmic domain of MiD51, and performed a structure-function analysis of MiD49 based on its homology to MiD51. MiD51 adopts a nucleotidyl transferase fold, and binds ADP as a co-factor that is essential for its function. Both MiDs contain a loop segment that is not present in other nucleotidyl transferase proteins, and this loop is used to interact with Drp1 and to recruit it to mitochondria.
Abstract:
Computational protein design (CPD) is a burgeoning field that uses a physical-chemical or knowledge-based scoring function to create protein variants with new or improved properties. This exciting approach has recently been used to generate proteins with entirely new functions, ones that are not observed in naturally occurring proteins. For example, several enzymes were designed to catalyze reactions that are not in the repertoire of any known natural enzyme. In these designs, novel catalytic activity was built de novo (from scratch) into a previously inert protein scaffold. In addition to de novo enzyme design, the computational design of protein-protein interactions can also be used to create novel functionality, such as neutralization of influenza. Our goal here was to design a protein that can self-assemble with DNA into nanowires. We used computational tools to homodimerize a transcription factor that binds a specific sequence of double-stranded DNA. We arranged the protein-protein and protein-DNA binding sites so that the self-assembly could occur in a linear fashion to generate nanowires. Upon mixing our designed protein homodimer with the double-stranded DNA, the molecules immediately self-assembled into nanowires. This nanowire topology was confirmed using atomic force microscopy. A co-crystal structure showed that the nanowire is assembled via the desired interactions. To the best of our knowledge, this is the first example of a protein-DNA self-assembly that does not rely on covalent interactions. We anticipate that this new material will stimulate further interest in the development of advanced biomaterials.
Abstract:
This thesis is a theoretical work on the space-time dynamic behavior of a nuclear reactor without feedback. Diffusion theory with G-energy groups is used.
In the first part the accuracy of the point-kinetics (lumped-parameter) model is examined. The fundamental approximation of this model is the splitting of the neutron density into a product of a known function of space and an unknown function of time; the properties of the system can then be averaged in space through the use of appropriate weighting functions, and as a result a set of ordinary differential equations is obtained for the description of the time behavior. It is clear that changes in the shape of the neutron-density distribution due to space-dependent perturbations are neglected. This introduces an error in the eigenvalues, and it is for this error that bounds are derived. This is done by using the method of weighted residuals to reduce the original eigenvalue problem to that of a real asymmetric matrix. Gershgorin-type theorems are then used to find discs in the complex plane in which the eigenvalues are contained. The radii of the discs depend on the perturbation in a simple manner.
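The Gershgorin step can be illustrated directly: every eigenvalue of a (possibly asymmetric) square matrix lies in at least one disc centered at a diagonal entry with radius equal to the absolute sum of the off-diagonal entries in that row. The matrix below is an arbitrary stand-in for the weighted-residual matrix in the text.

```python
import numpy as np

# An arbitrary real asymmetric matrix (illustrative, not from the thesis)
M = np.array([[ 4.0,  0.5, 0.2],
              [ 0.3, -1.0, 0.4],
              [ 0.1,  0.2, 2.0]])

# Disc centers are the diagonal entries; radii are the off-diagonal row sums
centers = np.diag(M)
radii = np.sum(np.abs(M), axis=1) - np.abs(centers)

eigs = np.linalg.eigvals(M)
# Gershgorin's theorem: every eigenvalue falls inside at least one disc
inside = [any(abs(lam - c) <= r + 1e-12 for c, r in zip(centers, radii))
          for lam in eigs]
```

Because the radii are simple row sums, a perturbation of the matrix translates directly into enlarged discs, which is the mechanism behind the eigenvalue bounds described above.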
In the second part the effect of delayed neutrons on the eigenvalues of the group-diffusion operator is examined. The delayed neutrons cause a shifting of the prompt-neutron eigenvalues and the appearance of the delayed eigenvalues. Using a simple perturbation method this shifting is calculated and the delayed eigenvalues are predicted with good accuracy.
Abstract:
Bulk n-InSb is investigated as a heterodyne detector for the submillimeter wavelength region. Two modes of operation are investigated: (1) the Rollin or hot-electron bolometer mode (zero magnetic field), and (2) the Putley mode (quantizing magnetic field). The highlight of the thesis work is the pioneering demonstration of the Putley-mode mixer at several frequencies. For example, a double-sideband system noise temperature of about 510 K was obtained using an 812 GHz methanol laser for the local oscillator. This performance is at least a factor of 10 more sensitive than any other performance reported to date at the same frequency. In addition, the Putley-mode mixer achieved system noise temperatures of 250 K at 492 GHz and 350 K at 625 GHz. The 492 GHz performance is about 50% better, and the 625 GHz performance about 100% better, than the previous best performances established by the Rollin-mode mixer. To achieve these results, it was necessary to design a totally new ultra-low-noise, room-temperature preamplifier to handle the higher source impedance imposed by Putley-mode operation. This preamplifier has considerably less input capacitance than comparably noisy ambient-temperature designs.
In addition to advancing receiver technology, this thesis also presents several novel results regarding the physics of n-InSb at low temperatures. A Fourier transform spectrometer was constructed and used to measure the submillimeter-wave absorption coefficient of relatively pure material at liquid-helium temperatures and in zero magnetic field. Below 4.2 K, the absorption coefficient was found to decrease with frequency much faster than predicted by Drudian theory. Much better agreement with experiment was obtained using a quantum theory based on inverse bremsstrahlung in a solid. Also, the noise of the Rollin-mode detector at 4.2 K was accurately measured and compared with theory. The power spectrum is found to be well fit by a recent theory of non-equilibrium noise due to Mather. Surprisingly, when biased for optimum detector performance, high-purity InSb cooled to liquid-helium temperatures generates less noise than that predicted by simple non-equilibrium Johnson noise theory alone. This explains in part the excellent performance of the Rollin-mode detector in the millimeter wavelength region.
Again using the Fourier transform spectrometer, spectra are obtained of the responsivity and direct-detection NEP as a function of magnetic field in the range 20-110 cm-1. The results show a discernible peak in the detector response at the conduction-electron cyclotron resonance frequency for magnetic fields as low as 3 kG at bath temperatures of 2.0 K. The spectra also display the well-known peak due to the cyclotron resonance of electrons bound to impurity states. The magnitude of the responsivity at both peaks is roughly constant with magnetic field and is comparable to the low-frequency Rollin-mode response. The NEP at the peaks is found to be much better than previous values at the same frequency and comparable to the best long-wavelength results previously reported. For example, a value of NEP = 4.5x10^-13 /Hz^(1/2) is measured at 4.2 K, 6 kG and 40 cm-1. Study of the responsivity under conditions of impact ionization showed a dramatic disappearance of the impurity-electron resonance while the conduction-electron resonance remained constant. This observation offers the first concrete evidence that the mobility of an electron in the N=0 and N=1 Landau levels is different. Finally, these direct-detection experiments indicate that the excellent heterodyne performance achieved at 812 GHz should be attainable up to frequencies of at least 1200 GHz.
Abstract:
There is a growing interest in taking advantage of possible patterns and structures in data so as to extract the desired information and overcome the curse of dimensionality. In a wide range of applications, including computer vision, machine learning, medical imaging, and social networks, the signal that gives rise to the observations can be modeled to be approximately sparse and exploiting this fact can be very beneficial. This has led to an immense interest in the problem of efficiently reconstructing a sparse signal from limited linear observations. More recently, low-rank approximation techniques have become prominent tools to approach problems arising in machine learning, system identification and quantum tomography.
In sparse and low-rank estimation problems, the challenge is the inherent intractability of the objective function, and one needs efficient methods to capture the low-dimensionality of these models. Convex optimization is often a promising tool to attack such problems. An intractable problem with a combinatorial objective can often be "relaxed" to obtain a tractable but almost as powerful convex optimization problem. This dissertation studies convex optimization techniques that can take advantage of low-dimensional representations of the underlying high-dimensional data. We provide provable guarantees that ensure that the proposed algorithms will succeed under reasonable conditions, and answer questions of the following flavor:
- For a given number of measurements, can we reliably estimate the true signal?
- If so, how good is the reconstruction as a function of the model parameters?
More specifically, i) Focusing on linear inverse problems, we generalize the classical error bounds known for the least-squares technique to the lasso formulation, which incorporates the signal model. ii) We show that intuitive convex approaches do not perform as well as expected when it comes to signals that have multiple low-dimensional structures simultaneously. iii) Finally, we propose convex relaxations for the graph clustering problem and give sharp performance guarantees for a family of graphs arising from the so-called stochastic block model. We pay particular attention to the following aspects. For i) and ii), we aim to provide a general geometric framework, in which the results on sparse and low-rank estimation can be obtained as special cases. For i) and iii), we investigate the precise performance characterization, which yields the right constants in our bounds and the true dependence between the problem parameters.
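A minimal sketch of the lasso formulation mentioned in (i), solved by proximal gradient descent (ISTA) on a toy sparse-recovery instance. The dimensions, regularization weight, and iteration count are illustrative choices, not values from the dissertation.

```python
import numpy as np

def soft_threshold(z, t):
    # proximal operator of t*||.||_1 (elementwise shrinkage)
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def ista_lasso(A, y, lam, n_iter=2000):
    # proximal gradient (ISTA) for  min_x 0.5*||Ax - y||^2 + lam*||x||_1
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1 / Lipschitz constant of grad
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x - step * (A.T @ (A @ x - y)), lam * step)
    return x

# Toy instance: more unknowns than measurements, sparse ground truth
rng = np.random.default_rng(0)
n, p = 40, 80
A = rng.standard_normal((n, p)) / np.sqrt(n)
x_true = np.zeros(p)
x_true[[3, 17, 42]] = [3.0, -2.0, 1.5]
y = A @ x_true
x_hat = ista_lasso(A, y, lam=0.01)
```

The point of the toy run is only that the convex relaxation recovers a sparse signal from fewer measurements than unknowns; the error bounds discussed in (i) quantify how good this reconstruction is as a function of the problem parameters.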
Abstract:
Network information theory and channels with memory are two important but difficult frontiers of information theory. In this two-part dissertation, we study these two areas, each comprising one part. For the first area we study the so-called entropy vectors via finite group theory, and the network codes constructed from finite groups. In particular, we identify the smallest finite group that violates the Ingleton inequality, an inequality respected by all linear network codes but not satisfied by all entropy vectors. Based on the analysis of this group we generalize it to several families of Ingleton-violating groups, which may be used to design good network codes. In that direction, we study the network codes constructed with finite groups, and in particular show that linear network codes are embedded in the group network codes constructed with these Ingleton-violating families. Furthermore, such codes are strictly more powerful than linear network codes, as they are able to violate the Ingleton inequality while linear network codes cannot. For the second area, we study the impact of memory on channel capacity through a novel communication system: the energy harvesting channel. Unlike traditional communication systems, the transmitter of an energy harvesting channel is powered by an exogenous energy harvesting device and a finite-sized battery. As a consequence, at each channel use the system can only transmit a symbol whose energy consumption is no more than the energy currently available. This new type of power supply introduces an unprecedented input constraint for the channel, which is random, instantaneous, and has memory. Furthermore, the energy harvesting process is naturally observed causally at the transmitter, but no such information is provided to the receiver. Both of these features pose great challenges for the analysis of the channel capacity.
In this work we use techniques from channels with side information and from finite-state channels to obtain lower and upper bounds on the capacity of the energy harvesting channel. In particular, we study the stationarity and ergodicity conditions of a surrogate channel to compute and optimize the achievable rates for the original channel. In addition, for practical code design of the system we study the pairwise error probabilities of the input sequences.
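The Ingleton inequality mentioned above can be checked numerically for random variables built as linear (GF(2)) functions of independent bits; such linear constructions, like all linear network codes, must satisfy it. The four-variable example below is a toy construction, not one of the dissertation's group-based codes, and uses the standard mutual-information form I(X1;X2) ≤ I(X1;X2|X3) + I(X1;X2|X4) + I(X3;X4).

```python
import itertools
import math
from collections import Counter

def entropy(pmf):
    # Shannon entropy in bits of a dict {outcome: probability}
    return -sum(p * math.log2(p) for p in pmf.values() if p > 0)

def joint(samples, idx):
    # marginal pmf of the coordinates in idx, uniform over the sample list
    c = Counter(tuple(s[i] for i in idx) for s in samples)
    n = len(samples)
    return {k: v / n for k, v in c.items()}

# X1, X2 are independent fair bits; X3 = X1 XOR X2; X4 = X1.
# All four are linear over GF(2), so Ingleton must hold.
samples = [(b1, b2, b1 ^ b2, b1)
           for b1, b2 in itertools.product([0, 1], repeat=2)]

def h(*idx):
    return entropy(joint(samples, idx))

# Ingleton: I(X1;X2) <= I(X1;X2|X3) + I(X1;X2|X4) + I(X3;X4)
lhs = h(0) + h(1) - h(0, 1)
rhs = (h(0, 2) + h(1, 2) - h(0, 1, 2) - h(2)
       + h(0, 3) + h(1, 3) - h(0, 1, 3) - h(3)
       + h(2) + h(3) - h(2, 3))
gap = rhs - lhs
```

The dissertation's point is the converse direction: group-characterizable entropy vectors from certain finite groups make this gap negative, which no linear code can do.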
Abstract:
Insurance activity involves the transfer of risk from the insured to the insurer. The insurer undertakes to pay a benefit if the risk materializes. This inverts the usual production cycle: the insurer sells coverage without knowing the exact timing or cost of that coverage. This peculiarity of the insurance business explains why an insurance company must be solvent at all times and in the face of any contingency. For this reason, the solvency of insurance companies is an aspect that the successive regulations governing insurance activity have addressed with steadily growing emphasis. At present, the legislation in force on insurer solvency is the European directive Solvency I. This directive establishes two concepts to guarantee solvency: technical provisions and the solvency margin. Technical provisions are calculated to guarantee the company's static solvency, that is, its ability to meet, at a given instant in time, the commitments the entity has assumed. The solvency margin covers dynamic solvency, which refers to future events that may affect the insurer's capacity. However, within a trend toward global risk management in which the banking sector had already moved ahead of the insurance sector with the Basel II framework, a European project to reform Solvency I was launched, and in November 2009 Directive 2009/138/EC of the European Parliament and of the Council, on life insurance and on the taking-up and pursuit of the business of insurance and reinsurance, better known as Solvency II, was adopted. This directive entails a profound change in the current solvency rules for insurance companies.
This change pursues the objective of establishing a common regulatory framework at the European level that is better adapted to the risk profile of each insurance company. The new directive defines two distinct levels of capital: the SCR (Solvency Capital Requirement) and the MCR (Minimum Capital Requirement). For the calculation of the SCR, the insurer is free to choose between two models: a standard model proposed by the European Insurance and Occupational Pensions Authority (EIOPA), which allows a simple calculation, and an internal model developed by the entity itself, which must be approved by the competent authorities. The possibility of using a mixed model combining the standard and internal approaches is also contemplated. For the development of the standard model, a series of quantitative impact studies (QIS) have been carried out. The latest study (QIS5) is the one that has set out the calculation of the SCR most precisely. It specifies shocks to be applied to the entity's balance sheet in order to stress it, and the SCR is constructed from the results obtained. The objective of this work is to synthesize the QIS5 technical specifications for life insurance and to carry out a practical application for a pure mixed (endowment) life insurance product. In the practical application, the cash flows associated with this product are determined in order to calculate its best estimate. The SCR is then determined by applying the shocks for mortality, lapse and expense risks. Finally, we calculate the risk margin associated with the SCR. This undergraduate thesis (TFG) closes with conclusions, the bibliography used, and an annex containing the tables employed.
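As a toy illustration of the best-estimate-plus-shock recipe (not the thesis's actual figures): discount the expected death benefits of a small portfolio, reprice under a permanently increased mortality table, and take the difference in best estimates as the mortality capital charge. The mortality rates and discount curve are invented; the 15% stress mirrors the QIS5 mortality shock.

```python
# Illustrative one-year death probabilities and a flat discount curve
qx = [0.002, 0.003, 0.004]
rates = [0.02, 0.02, 0.02]
sum_assured = 100_000.0

def best_estimate(qx):
    # expected discounted death benefits, paid at the end of each year
    bel, survival = 0.0, 1.0
    for t, q in enumerate(qx, start=1):
        cash_flow = survival * q * sum_assured
        bel += cash_flow / (1 + rates[t - 1]) ** t
        survival *= 1 - q
    return bel

bel_base = best_estimate(qx)
bel_shocked = best_estimate([q * 1.15 for q in qx])   # +15% mortality stress
scr_mortality = bel_shocked - bel_base
```

Repeating the same recipe with lapse and expense shocks, and aggregating with the prescribed correlations, yields the standard-formula SCR the abstract describes.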
Abstract:
In the Brazilian energy matrix, diesel oil occupies a prominent place, yet it is still sold with sulfur- and nitrogen-compound contents considered high under the environmental regulations that will come into force in the coming years. Traditionally, the removal of these sulfur compounds from petroleum streams is accomplished by hydrotreating (HDT) processes. However, given the characteristics of Brazilian diesel, greater process severity is needed to meet the new fuel specifications, which implies growing investment and operating costs to meet the resulting demand. In this context, adsorption is being studied for the purification of the diesel stream leaving the hydrotreating step, as a final polishing stage to reach the most demanding specifications. Commercial adsorbents are known to have limitations in removing these contaminants, and one alternative that has shown promise is the incorporation of transition metals into the structure of the solid. In the present work, commercial adsorbents such as aluminas, silica-aluminas and clays were modified by the introduction of nickel, cobalt and molybdenum, and the performance of these modifications was tested for the adsorption of the sulfur and nitrogen compounds present in a hydrotreated diesel. Chemical, physical, textural and morphological characterizations were performed on the solids with and without transition metals incorporated into the original structure. The adsorption experiments were carried out at 40°C. Among all the solids evaluated, the adsorbent showing the best performance in removing sulfur and nitrogen compounds per mass of adsorbent was the unmodified silica-alumina, which was able to remove about 90% of the nitrogen compounds and 55% of the sulfur compounds at 2 g of solid per 10 mL of diesel.
For the modified materials, it was observed that the incorporation of the transition metals reduced the surface area and total pore volume of the solids. As a result, the effects expected from the interactions between the metal sites and the nitrogen and sulfur compounds were diminished.
Abstract:
This partial translation of a larger paper provides taxonomic descriptions of five species of zoosporic fungi: Olpidium vampyrellae, O. pseudosporearum, O. leptophrydis, Rhizophidium leptophrydis and Chytridium lateoperculatum.
Abstract:
Magnetic resonance techniques have given us a powerful means for investigating dynamical processes in gases, liquids and solids. Dynamical effects manifest themselves in both resonance line shifts and linewidths and, accordingly, require detailed analyses to extract the desired information. The success of a magnetic resonance experiment depends critically on relaxation mechanisms to maintain thermal equilibrium between spin states. Consequently, there must be an interaction between the excited spin states and their immediate molecular environment which promotes changes in spin orientation while excess magnetic energy is coupled into other degrees of freedom by non-radiative processes. This is well known as spin-lattice relaxation. Certain dynamical processes cause fluctuations in the spin-state energy levels, leading to spin-spin relaxation, and here again the environment at the molecular level plays a significant role in the magnitude of the interaction. Relatively few electron spin relaxation studies of solutions have been conducted, and the present work is addressed toward the extension of our knowledge in this area and the retrieval of dynamical information from line-shape analyses on a time scale comparable to diffusion-controlled phenomena.
Specifically, the electron spin relaxation of three Mn+2 3d5 complexes, Mn(CH3CN)6+2, MnCl4-2 and MnBr4-2, in acetonitrile has been studied in considerable detail. The effective spin-Hamiltonian constants were carefully evaluated under a wide range of experimental conditions. Resonance widths of these Mn+2 complexes were studied in the presence of various excess ligand ions and as a function of concentration, viscosity, temperature and frequency (X-band, ~9.5 GHz, and K-band, ~35 GHz).
A number of interesting conclusions were drawn from these studies. For the Et4NCl-MnCl4-2 system, several relaxation mechanisms leading to resonance broadening were observed. One source appears to arise through spin-orbit interactions caused by modulation of the ligand field resulting from transient distortions of the complex imparted by solvent fluctuations in the immediate surroundings of the paramagnetic ion. An additional spin relaxation was assigned to the formation of ion pairs [Et4N+…MnCl4-2], and it was possible to estimate the dissociation constant for this species in acetonitrile.
The Bu4NBr-MnBr4-2 study was considerably more interesting. As in the former case, solvent fluctuations and ion-pairing of the paramagnetic complex [Bu4N+…MnBr4-2] provide significant relaxation for the electronic spin system. Most interesting, without doubt, is the onset of a new relaxation mechanism leading to resonance broadening which is best interpreted as chemical exchange. Thus, assuming that resonance widths were simply governed by electron spin state lifetimes, we were able to extract dynamical information from an interaction in which the initial and final states are the same:
MnBr4-2 + Br- = MnBr4-2 + Br-.
The bimolecular rate constants were obtained at six different temperatures, and their magnitudes suggested that the exchange is probably diffusion controlled with essentially a zero energy of activation. The most important source of spin relaxation in this system stems directly from dipolar interactions between the manganese 3d5 electrons. Moreover, the dipolar broadening is strongly frequency dependent, indicating a deviation between the transverse and longitudinal relaxation times. We are led to the conclusion that the 3d5 spin states of ion-paired MnBr4-2 are significantly correlated, so that dynamical processes are also entering the picture. It was possible to estimate the correlation time, τd, characterizing this dynamical process.
In Part II we study nuclear magnetic relaxation of bromine ions in the MnBr4-2-Bu4NBr-acetonitrile system. Essentially we monitor the 79Br and 81Br linewidths in response to the [MnBr4-2]/[Br-] ratio, with the express purpose of supporting our contention that exchange is occurring between "free" bromine ions in the solvent and bromine in the first coordination sphere of the paramagnetic anion. The complexity of the system elicited a two-part study: (1) the linewidth behavior of Bu4NBr in anhydrous CH3CN in the absence of MnBr4-2, and (2) in the presence of MnBr4-2. It was concluded in study (1) that dynamical association, Bu4NBr ⇌ Bu4N+ + Br-, was modulating field-gradient interactions at frequencies high enough to provide an estimation of the unimolecular rate constant, k1. A comparison of the two isotopic bromine linewidth-mole fraction results led to the conclusion that quadrupole interactions provided the dominant relaxation mechanism. In study (2) the "residual" bromine linewidths for both 79Br and 81Br are clearly controlled by quadrupole interactions which appear to be modulated by very rapid dynamical processes other than molecular reorientation. We conclude that the "residual" linewidth has its origin in chemical exchange and that bromine nuclei exchange rapidly between a "free" solvated ion and the paramagnetic complex, MnBr4-2.