947 results for Pumping machinery, Electric.


Relevance:

20.00%

Publisher:

Abstract:

Introduction

1.1 Occurrence of polycyclic aromatic hydrocarbons (PAHs) in the environment

Worldwide industrial and agricultural development has released a large number of natural and synthetic hazardous compounds into the environment through careless waste disposal, illegal waste dumping and accidental spills. As a result, there are numerous sites in the world that require cleanup of soils and groundwater. Polycyclic aromatic hydrocarbons (PAHs) are one of the major groups of these contaminants (Da Silva et al., 2003). PAHs constitute a diverse class of organic compounds consisting of two or more aromatic rings in various structural configurations (Prabhu and Phale, 2003). As derivatives of benzene, PAHs are thermodynamically stable. In addition, these chemicals tend to adhere to particle surfaces such as soils because of their low water solubility and strong hydrophobicity, which results in greater persistence under natural conditions. This persistence, coupled with their potential carcinogenicity, makes PAHs problematic environmental contaminants (Cerniglia, 1992; Sutherland, 1992). PAHs are found in high concentrations at many industrial sites, particularly those associated with the petroleum, gas production and wood preserving industries (Wilson and Jones, 1993).

1.2 Remediation technologies

Conventional techniques used for the remediation of soil polluted with organic contaminants include excavation of the contaminated soil with disposal to a landfill, or capping (containment) of the contaminated areas of a site. These methods have drawbacks. The first simply moves the contamination elsewhere and may create significant risks in the excavation, handling and transport of hazardous material; additionally, it is very difficult and increasingly expensive to find new landfill sites for the final disposal of the material.
The cap-and-containment method is only an interim solution, since the contamination remains on site, requiring monitoring and maintenance of the isolation barriers long into the future, with all the associated costs and potential liability. A better approach than these traditional methods is to destroy the pollutants completely, if possible, or to transform them into harmless substances. Technologies that have been used include high-temperature incineration and various types of chemical decomposition (for example, base-catalyzed dechlorination and UV oxidation). However, these methods have significant disadvantages, principally their technological complexity, high cost and lack of public acceptance. Bioremediation, in contrast, is a promising option for the complete removal and destruction of contaminants.

1.3 Bioremediation of PAH-contaminated soil and groundwater

Bioremediation is the use of living organisms, primarily microorganisms, to degrade or detoxify hazardous wastes into harmless substances such as carbon dioxide, water and cell biomass. Most PAHs are biodegradable under natural conditions (Da Silva et al., 2003; Meysami and Baheri, 2003), and bioremediation of PAH wastes has been extensively studied at both laboratory and commercial levels. It has been implemented at a number of contaminated sites, including the cleanup of the Exxon Valdez oil spill in Prince William Sound, Alaska in 1989, the Mega Borg spill off the Texas coast in 1990 and the Burgan Oil Field, Kuwait in 1994 (Purwaningsih, 2002). Different strategies for PAH bioremediation, such as in situ, ex situ or on-site bioremediation, have been developed in recent years. In situ bioremediation is applied to soil and groundwater at the site without removing the contaminated soil or groundwater, and is based on providing optimum conditions for microbiological contaminant breakdown.
Ex situ bioremediation of PAHs, on the other hand, is applied to soil and groundwater that have been removed from the site via excavation (soil) or pumping (water); hazardous contaminants are then converted efficiently into harmless compounds in controlled bioreactors.

1.4 Bioavailability of PAHs in the subsurface

Frequently, PAH contamination in the environment occurs as contaminants sorbed onto soil particles rather than as non-aqueous phase liquids (NAPLs). It is known that the biodegradation rate of most PAHs sorbed onto soil is far lower than the rates measured in solution cultures of microorganisms with pure solid pollutants (Alexander and Scow, 1989; Hamaker, 1972). It is generally believed that only the fraction of PAHs dissolved in solution can be metabolized by microorganisms in soil. The amount of contaminant that can be readily taken up and degraded by microorganisms is defined as its bioavailability (Bosma et al., 1997; Maier, 2000). Two phenomena have been suggested to cause the low bioavailability of PAHs in soil (Danielsson, 2000). The first is strong adsorption of the contaminants to soil constituents, which leads to very slow release rates of contaminants to the aqueous phase. Sorption is often well correlated with soil organic matter content (Means, 1980) and significantly reduces biodegradation (Manilal and Alexander, 1991). The second is slow mass transfer of pollutants, such as pore diffusion within soil aggregates or diffusion in the soil organic matter. The complex set of these physical, chemical and biological processes is schematically illustrated in Figure 1: biodegradation takes place in the soil solution, while diffusion occurs in the narrow pores in and between soil aggregates (Danielsson, 2000).
Seemingly contradictory studies in the literature indicate that the rate and final extent of metabolism may be either lower or higher for soil-sorbed PAHs than for pure PAHs (Van Loosdrecht et al., 1990). These contrasting results demonstrate that the bioavailability of organic contaminants sorbed onto soil is far from well understood. Besides bioavailability, several other factors influence the rate and extent of PAH biodegradation in soil, including microbial population characteristics, the physical and chemical properties of the PAHs, and environmental factors (temperature, moisture, pH, degree of contamination).

Figure 1: Schematic diagram showing possible rate-limiting processes during bioremediation of hydrophobic organic contaminants in a contaminated soil-water system (not to scale) (Danielsson, 2000).

1.5 Increasing the bioavailability of PAHs in soil

Attempts to improve the biodegradation of PAHs in soil by increasing their bioavailability include the use of surfactants, solvents or solubility enhancers. However, the introduction of a synthetic surfactant may simply add one more pollutant (Wang and Brusseau, 1993). A study by Mulder et al. showed that the introduction of hydroxypropyl-β-cyclodextrin (HPCD), a well-known PAH solubility enhancer, significantly increased the solubilization of PAHs but did not improve their biodegradation rate (Mulder et al., 1998), indicating that further research is required to develop a feasible and efficient remediation method. Enhancing PAH mass transfer from the soil phase to the liquid phase might prove an efficient and environmentally low-risk way of addressing the problem of slow PAH biodegradation in soil.
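The two bioavailability-limiting phenomena described above can be illustrated with a minimal two-compartment sketch: PAH desorbs slowly from the soil into solution, and only the dissolved fraction is biodegraded. All rate constants here are hypothetical round numbers chosen for illustration, not values from the cited studies.

```python
# Minimal sketch (hypothetical rate constants) of desorption-limited
# biodegradation: sorbed PAH (s) slowly releases into solution (c),
# and only the dissolved fraction is degraded by microorganisms.

def simulate(s0=100.0, k_des=0.05, k_bio=0.5, dt=0.01, t_end=100.0):
    """Euler integration of sorbed (s) and dissolved (c) PAH mass."""
    s, c, degraded = s0, 0.0, 0.0
    for _ in range(int(t_end / dt)):
        desorbed = k_des * s * dt   # slow mass transfer to the aqueous phase
        biodeg = k_bio * c * dt     # degradation of the dissolved fraction only
        s -= desorbed
        c += desorbed - biodeg
        degraded += biodeg
    return s, c, degraded

s, c, degraded = simulate()
# With k_des << k_bio the overall removal is limited by desorption:
# the dissolved concentration stays low while sorbed mass drains slowly.
```

Varying k_des while holding k_bio fixed reproduces the qualitative point of Section 1.5: once desorption is rate-limiting, a faster degrader buys nothing, and only enhanced mass transfer speeds up remediation.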

Relevance:

20.00%

Publisher:

Abstract:

The research activity carried out during the PhD course in Electrical Engineering belongs to the branch of electric and electronic measurements. The main subject of this thesis is a distributed measurement system to be installed in medium-voltage power networks, together with the methods developed to analyze the data acquired by the measurement system and to monitor power quality. Chapter 2 illustrates the increasing interest in power quality in electrical systems, reporting the international research activity on the problem and the relevant standards and guidelines that have been issued. The quality of the voltage provided by utilities, and influenced by customers, at the various points of a network emerged as an issue only in recent years, in particular as a consequence of energy market liberalization. Traditionally, the quality of the delivered energy was associated mostly with its continuity, so reliability was the main characteristic to be ensured for power systems. Nowadays, the number and duration of interruptions are the "quality indicators" most commonly perceived by customers; for this reason, a short section is also dedicated to network reliability and its regulation. In this context it should be noted that although the measurement system developed during the research activity belongs to the field of power quality evaluation, the information registered in real time by its remote stations can be used to improve system reliability too. Given the vast range of power-quality-degrading phenomena that can occur in distribution networks, the study focuses on electromagnetic transients affecting line voltages.
The outcome of this study has been the design and realization of a distributed measurement system which continuously monitors the phase signals at different points of a network, detects the occurrence of transients superposed on the fundamental steady-state component and registers the time of occurrence of such events. The data set is then used to locate the source of the transient disturbance propagating along the network lines. Most of the oscillatory transients affecting line voltages are due to faults occurring somewhere in the distribution system and must be detected before the protection equipment intervenes. An important conclusion is that the method can improve the reliability of the monitored network, since knowing the location of a fault allows the energy manager to minimize both the area of the network to be disconnected for protection purposes and the time spent by technical staff to recover from the abnormal condition and/or the damage. The part of the thesis presenting the results of this study is structured as follows: chapter 3 deals with the propagation of electromagnetic transients in power systems, defining the characteristics and causes of the phenomena and briefly reporting the theory and approaches used to study transient propagation. The state of the art concerning methods to detect and locate faults in distribution networks is then presented. Finally, attention is paid to the particular technique adopted for this purpose in the thesis, and to the methods developed on the basis of that approach. Chapter 4 reports the configurations of the distribution networks on which the fault location method has been applied by means of simulations, together with the results obtained case by case. In this way the performance of the location procedure is tested first under ideal and then under realistic operating conditions.
Chapter 5 presents the measurement system designed to implement the transient detection and fault location method. The hardware in the measurement chain of every acquisition channel in the remote stations is described. The global measurement system is then characterized by considering the non-ideal aspects of each device that can contribute to the final combined uncertainty on the estimated position of the fault in the network under test; this parameter is computed by means of a numerical procedure, in accordance with the Guide to the Expression of Uncertainty in Measurement. The last chapter describes a device designed and realized during the PhD activity to replace the commercial capacitive voltage divider in the conditioning block of the measurement chain. This study aimed at providing an alternative to the existing transducer with equivalent performance and lower cost, significantly reducing the economic impact of the investment in the whole measurement system and making application of the method much more feasible.
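The core idea of locating a fault from time-stamped transient arrivals can be sketched with the textbook double-ended traveling-wave relation (this is the generic technique, not necessarily the exact algorithm of the thesis; line length and propagation speed below are assumed example values).

```python
# Hedged sketch of double-ended traveling-wave fault location: two remote
# stations time-stamp the arrival of the transient wavefront, and the fault
# position follows from the arrival-time difference.

def locate_fault(t_a, t_b, line_length_km, v_km_per_s=2.9e5):
    """Distance (km) of the fault from station A on a line of given length.

    t_a, t_b : arrival times (s) of the transient at stations A and B.
    v_km_per_s : assumed wave speed (near light speed on overhead lines,
                 lower on cables).
    """
    return (line_length_km + v_km_per_s * (t_a - t_b)) / 2.0

# Example: 30 km line, fault 10 km from A. The wave reaches A after
# 10/v seconds and B after 20/v seconds.
v = 2.9e5
x = locate_fault(10.0 / v, 20.0 / v, 30.0, v)
```

In practice the accuracy of x is dominated by the time-stamping uncertainty of the remote stations, which is exactly why the thesis propagates the combined uncertainty of the whole measurement chain per the GUM.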

Relevance:

20.00%

Publisher:

Abstract:

[EN] This paper presents our research on nucleation and its dependence on external conditions as well as on the internal characteristics of the solution itself. Among the research lines of our group, we have been studying the influence of electric fields on two different but related compounds, lithium-potassium sulfate and lithium-ammonium sulfate, both of which show a variation in the nucleation rate when an electric field is applied during crystal growth. Moreover, this paper describes a laboratory protocol for teaching university science students the nucleation process itself and how it depends on externally applied conditions, e.g. electric fields.

Relevance:

20.00%

Publisher:

Abstract:

[EN] This work presents the calibration and validation of an air quality finite element model applied to emissions from a thermal power plant located in Gran Canaria. The calibration is performed using genetic algorithms. To calibrate and validate the model, the authors use empirical measurements of pollutant concentrations from 4 stations located near the power plant; an hourly record per station over 3 days is available. Measurements from 3 stations are used for calibration, while validation uses the measurements from the remaining station…
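The genetic-algorithm calibration described above can be illustrated with a minimal sketch: candidate parameter vectors evolve to minimize the misfit between modelled and measured concentrations. The "model" and station data below are synthetic stand-ins, not the paper's finite element model or its measurements.

```python
# Minimal genetic algorithm calibrating a toy 2-parameter model against
# synthetic "station" data (illustrative only; the real application fits
# an air quality finite element model to measured pollutant levels).

import random
random.seed(0)

def model(params, x):
    a, b = params
    return a * x + b                     # placeholder for the dispersion model

measured = [(x, 2.0 * x + 1.0) for x in range(10)]   # synthetic observations

def fitness(params):
    return -sum((model(params, x) - y) ** 2 for x, y in measured)

def evolve(pop_size=40, generations=200):
    pop = [(random.uniform(-5, 5), random.uniform(-5, 5)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]   # elitist selection: keep the best half
        children = []
        while len(children) < pop_size - len(parents):
            p1, p2 = random.sample(parents, 2)
            # averaging crossover plus small Gaussian mutation
            children.append(tuple((u + v) / 2 + random.gauss(0, 0.1)
                                  for u, v in zip(p1, p2)))
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()   # should approach the generating parameters (2.0, 1.0)
```

The same select/crossover/mutate loop applies unchanged when `model` is an expensive simulation; only the fitness evaluation cost changes.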

Relevance:

20.00%

Publisher:

Abstract:

[EN] This work presents the calibration and validation of an air quality finite element model applied to the surroundings of the Jinamar electric power plant on the island of Gran Canaria (Spain). The model involves the generation of an adaptive tetrahedral mesh, the computation of an ambient wind field, the inclusion of the plume rise effect in the wind field, and the simulation of transport and reaction of pollutants. The main advantage of the model is its treatment of complex terrain, which offers an alternative to the standard implementation of current models. In addition, it reduces the computational cost through the use of unstructured meshes...

Relevance:

20.00%

Publisher:

Abstract:

The term "brain imaging" identifies a set of techniques for analyzing the structure and/or functional behavior of the brain in normal and/or pathological situations. These techniques are widely used in the study of brain activity. In addition to clinical usage, analysis of brain activity is gaining popularity in other emerging fields, e.g. brain-computer interfaces (BCI) and the study of cognitive processes. In these contexts, classical solutions (e.g. fMRI, PET-CT) can be unfeasible, due to their low temporal resolution, high cost and limited portability. For these reasons, alternative low-cost techniques are an object of research, typically based on simple recording hardware and an intensive data elaboration process. Typical examples are electroencephalography (EEG) and electrical impedance tomography (EIT), in which the electric potential at the patient's scalp is recorded by high-impedance electrodes. In EEG the potentials are generated directly by neuronal activity, while in EIT they result from the injection of small currents at the scalp. To retrieve meaningful insights on brain activity from the measurements, EIT and EEG rely on detailed knowledge of the underlying electrical properties of the body, obtained from numerical models of the electric field distribution therein. The inhomogeneous and anisotropic electric properties of human tissues make accurate modeling and simulation very challenging, leading to a trade-off between physical accuracy and technical feasibility which currently limits the capabilities of these techniques severely. Moreover, elaboration of the recorded data requires computationally intensive regularization techniques, which hampers applications with tight timing constraints (such as BCI). This work focuses on the parallel implementation of a workflow for EEG and EIT data processing.
The resulting software is accelerated using many-core GPUs, in order to provide solutions in reasonable times and address the requirements of real-time BCI systems, without over-simplifying the complexity and accuracy of the head models.
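The kind of regularized inverse problem behind such reconstructions can be sketched in a few lines. This is a generic Tikhonov-regularized minimum-norm example with a random toy lead-field, not the thesis implementation; on a GPU, it is this same dense linear algebra that gets accelerated.

```python
# Illustrative sketch of a regularized EEG-style inverse problem:
# scalp potentials b relate to source amplitudes x through a lead-field
# matrix A, and Tikhonov regularization stabilizes the inversion.

import numpy as np

rng = np.random.default_rng(42)
A = rng.normal(size=(30, 10))     # toy lead-field: 30 electrodes, 10 sources
x_true = np.zeros(10)
x_true[3] = 1.0                   # a single active source
b = A @ x_true + 0.01 * rng.normal(size=30)   # noisy scalp measurements

lam = 0.1                         # regularization parameter (assumed)
# Solve min ||A x - b||^2 + lam ||x||^2 via the normal equations.
x_hat = np.linalg.solve(A.T @ A + lam * np.eye(10), A.T @ b)
```

With realistic head models A has many more columns than rows and the problem becomes severely ill-posed, which is where the choice of regularizer and its computational cost start to dominate.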

Relevance:

20.00%

Publisher:

Abstract:

Against an international backdrop of strong interest in sustainable development and in the energy challenges of the future, DIEM, in collaboration with other research institutes and private companies, is designing the integration of advanced components into a biomass-fired boiler. The final goal is a biomass boiler that produces energy more efficiently and with reduced environmental impact. The application is initially aimed at small-to-medium boilers (up to 350 kW thermal), given the wide diffusion of this type of plant. The components in question are:
- an experimental high-efficiency filter for particulate removal;
- Seebeck-effect cells for the production of electric energy directly from thermal energy, with no moving mechanical parts;
- an Ogden pump for the production of mechanical energy directly from thermal energy.
The aim of the research activity is to design the integration of these devices with a 290 kW thermal biomass boiler, in order to build a prototype of a stand-alone boiler with reduced environmental impact. In particular, once steady-state conditions are reached, the boiler can power its own electrical loads, guaranteeing safe operation in case of blackout and allowing installation in remote areas with no connection to the electricity grid. Moreover, by means of a steam pump (Ogden pump), the boiler can supply mechanical energy for pumping fluids, an opportunity considered particularly interesting for installations in agricultural settings. Finally, coupling the boiler with a high-efficiency, low-cost filter abates pollutant emissions, favoring a wider diffusion of the technology without further environmental impact.

Relevance:

20.00%

Publisher:

Abstract:

In this work the numerical coupling of thermal and electric network models with model equations for optoelectronic semiconductor devices is presented. Modified nodal analysis (MNA) is applied to model the electric networks. Thermal effects are modeled by an accompanying thermal network. Semiconductor devices are modeled by the energy-transport model, which allows for thermal effects; this model is extended here to optoelectronic semiconductor devices. The temperature of the crystal lattice of the semiconductor devices is modeled by the heat flow equation, whose heat source term is derived from thermodynamic and phenomenological considerations of the energy fluxes. The energy-transport model is coupled directly into the network equations, and the heat flow equation for the lattice temperature is coupled directly into the accompanying thermal network. The coupled thermal-electric network-device model results in a system of partial differential-algebraic equations (PDAE). Numerical examples are presented for the coupling of the network equations with one-dimensional semiconductor equations. Hybridized mixed finite elements are applied for the space discretization of the semiconductor equations, and backward differentiation formulas for the time discretization. Thus, positivity of the charge carrier densities and continuity of the current density are guaranteed even for the coupled model.
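Modified nodal analysis, the network-modeling technique named above, can be shown on a minimal circuit: a 5 V source driving a resistive divider (R1 = R2 = 1 kΩ, values chosen for illustration). Unknowns are the two node voltages and the source branch current.

```python
# Tiny MNA example: KCL rows for the two nodes plus one branch row for
# the voltage source constraint v1 = 5 V. Solving the linear system
# yields node voltages and the source current in one shot.

import numpy as np

R1 = R2 = 1000.0
A = np.array([
    [1/R1,        -1/R1,  1.0],   # KCL at node 1 (includes source current)
    [-1/R1, 1/R1 + 1/R2,  0.0],   # KCL at node 2
    [1.0,           0.0,  0.0],   # branch equation: v1 = 5 V
])
z = np.array([0.0, 0.0, 5.0])
v1, v2, i_v = np.linalg.solve(A, z)
# Equal divider: v1 = 5 V, v2 = 2.5 V; 2.5 mA flows through the source.
```

Device models enter the same framework by adding their (generally nonlinear) current contributions to the KCL rows, which is how the energy-transport equations are coupled into the network in the work above.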

Relevance:

20.00%

Publisher:

Abstract:

Conjugated polymers and conjugated polymer blends have attracted great interest due to their potential applications in biosensors and organic electronics. The sub-100 nm morphology of these materials is known to heavily influence their electromechanical properties and the performance of the devices they are part of; these properties include charge injection, transport, recombination and trapping, the phase behavior, and the mechanical robustness of the polymers and blends. Electrical scanning probe microscopy techniques are ideal tools to measure simultaneously the electric (conductivity and surface potential) and dielectric (dielectric constant) properties, surface morphology, and mechanical properties of thin films of conjugated polymers and their blends.

In this thesis, I first present a combined topography, Kelvin probe force microscopy (KPFM) and scanning conductive torsion mode microscopy (SCTMM) study on a gold/polystyrene model system. This system mimics conjugated polymer blends in which conductive domains (gold nanoparticles) are embedded in a non-conductive matrix (polystyrene film), as in polypyrrole:polystyrene sulfonate (PPy:PSS) and poly(3,4-ethylenedioxythiophene):poly(styrenesulfonate) (PEDOT:PSS). I controlled the nanoscale morphology of the model by varying the distribution of gold nanoparticles in the polystyrene films and studied the influence of the different morphologies on the surface potential measured by KPFM and on the conductivity measured by SCTMM. With the knowledge gained from analyzing the data of the model system I was able to predict the nanostructure of a homemade PPy:PSS blend.

The morphologic, electric and dielectric properties of water-based conjugated polymer blends, e.g. PPy:PSS or PEDOT:PSS, are known to be influenced by their water content. These properties in turn influence the macroscopic performance when the polymer blends are employed in a device.
In the second part I therefore present an in situ KPFM study of the humidity dependence of PPy:PSS films spin-coated and drop-coated on hydrophobic highly ordered pyrolytic graphite substrates. I additionally used a particular KPFM mode that detects the second harmonic of the electrostatic force, which yields images of the dielectric constant of the samples. Upon increasing relative humidity, the surface morphology and composition of the films changed; I also observed that relative humidity affected thermally unannealed and annealed PPy:PSS films differently.

The conductivity of a conjugated polymer may change once it is embedded in a non-conductive matrix, as for PPy embedded in PSS. To measure the conductivity of single conjugated polymer particles, in the third part I present a direct method based on microscopic four-point probes. I started with metal core-shell and metal bulk particles as models and measured their conductivities. The study could then be extended to measure the conductivity of single PPy particles (core-shell and bulk) with diameters of a few micrometers.
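The textbook relations behind a collinear four-point probe measurement can be stated compactly (this is the standard geometry-corrected analysis, not necessarily the thesis' microscopic-probe treatment; the numerical values are assumed for illustration).

```python
# Four-point probe basics: current I is forced through the outer tips,
# voltage V is sensed by the inner pair, so contact resistance drops out.

import math

def resistivity_bulk(v, i, s):
    """Bulk resistivity (ohm*m) for a semi-infinite sample, tip spacing s (m)."""
    return 2.0 * math.pi * s * v / i

def sheet_resistance(v, i):
    """Sheet resistance (ohm/sq) for a thin film (thickness << tip spacing)."""
    return (math.pi / math.log(2.0)) * v / i

# Illustrative numbers (assumed): 1 mA forced, 2.3 mV sensed, 10 um spacing.
rho = resistivity_bulk(2.3e-3, 1.0e-3, 10e-6)
r_s = sheet_resistance(2.3e-3, 1.0e-3)
```

For particles only a few micrometers across, additional finite-size correction factors to these formulas become essential, which is part of what makes the microscopic-probe measurement non-trivial.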

Relevance:

20.00%

Publisher:

Abstract:

Within this thesis a new double-laser-pulse pumping scheme for plasma-based, transient collisionally excited soft x-ray lasers (SXRL) was developed, characterized and utilized for applications. SXRL operation from ~50 up to ~200 electronvolts was demonstrated with this concept. As a central technical tool, a special Mach-Zehnder interferometer in the chirped pulse amplification (CPA) laser front-end was developed for the generation of fully controllable double pulses to optimally pump SXRLs.

This Mach-Zehnder device is fully controllable and enables the creation of two CPA pulses of different pulse duration and variable energy balance with an adjustable time delay. Besides SXRL pumping, the double-pulse configuration was applied to determine the B-integral in the CPA laser system by amplifying a short-pulse replica in the system, followed by an analysis in the time domain. The measurement of B-integral values in the 0.1 to 1.5 radian range, limited only by the reachable laser parameters, proved to be a promising tool to characterize nonlinear effects in CPA laser systems.

Regarding SXRL pumping, the double pulse was configured to optimally produce the gain medium for SXRL amplification. The focusing geometry of the two collinear pulses under the same grazing incidence angle on the target significantly improved the generation of the active plasma medium: on the one hand through the intrinsically guaranteed exact overlap of the two pulses on the target, and on the other hand through the grazing-incidence pre-pulse plasma generation, which allows SXRL operation at higher electron densities, enabling higher gain in longer-wavelength SXRLs and higher efficiency in shorter-wavelength SXRLs.
The observed gain enhancement was confirmed by plasma hydrodynamic simulations.

The first introduction of double-short-pulse single-beam grazing incidence pumping for SXRLs below 20 nanometers at the laser facility PHELIX in Darmstadt (Germany) resulted in reliable operation of a nickel-like palladium SXRL at 14.7 nanometers, with the pump energy threshold strongly reduced to less than 500 millijoules. With the adaptation of the concept, named double-pulse single-beam grazing incidence pumping (DGRIP), and the transfer of this technology to the laser facility LASERIX in Palaiseau (France), improved efficiency and stability of table-top high-repetition-rate soft x-ray lasers in the wavelength region below 20 nanometers was demonstrated. With a total pump laser energy below 1 joule on target, 2 microjoules of nickel-like molybdenum soft x-ray laser emission at 18.9 nanometers was obtained at 10 hertz repetition rate, proving the attractiveness for high-average-power operation. An easy and rapid alignment procedure fulfilled the requirements of a sophisticated installation, and the highly stable output satisfied the need for a reliable, strong SXRL source. The qualities of the DGRIP scheme were confirmed in an irradiation campaign on user samples with over 50,000 shots, corresponding to a deposited energy of ~50 millijoules.

The generation of double pulses with high energies up to ~120 joules enabled the transfer to shorter-wavelength SXRL operation at the laser facility PHELIX. The application of DGRIP proved to be a simple and efficient method for the generation of soft x-ray lasers below 10 nanometers. Nickel-like samarium soft x-ray lasing at 7.3 nanometers was achieved at a low total pump energy threshold of 36 joules, which confirmed the suitability of the pumping scheme. Reliable and stable SXRL operation was demonstrated thanks to the single-beam pumping geometry, despite the large optical apertures.
The soft x-ray lasing of nickel-like samarium was an important milestone for the feasibility of applying the pumping scheme at the higher pump pulse energies necessary to reach soft x-ray laser wavelengths in the water window. The reduction of the total pump energy below 40 joules for 7.3 nanometer short-wavelength lasing now fulfills the requirement for installation at the high-repetition-rate laser facility LASERIX.
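A back-of-the-envelope estimate shows how B-integral values of the quoted order arise; all numbers below are assumed textbook-style figures for illustration, not parameters of the PHELIX system.

```python
# Single-element B-integral estimate: B = (2*pi/lambda) * n2 * I * L,
# the accumulated nonlinear phase through a medium of length L at
# intensity I with nonlinear index n2 (all values assumed).

import math

def b_integral(wavelength_m, n2_m2_per_w, intensity_w_m2, length_m):
    """Accumulated nonlinear phase (radians) through one optical element."""
    return (2.0 * math.pi / wavelength_m) * n2_m2_per_w * intensity_w_m2 * length_m

# Example: ~1 um pulse, fused-silica-like n2 ~ 3e-20 m^2/W,
# 1 GW/cm^2 (= 1e13 W/m^2) through 10 cm of glass.
B = b_integral(1053e-9, 3e-20, 1e13, 0.10)
```

A full CPA chain sums such contributions over every element, which is why a direct time-domain measurement of the accumulated B, as described above, is valuable.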

Relevance:

20.00%

Publisher:

Abstract:

The growing international concern about human exposure to the magnetic fields generated by electric power lines has unavoidably led to the imposition of legal limits. Respecting these limits implies being able to calculate the generated magnetic field easily and accurately, including in complex configurations; twisting of the phase conductors is such a case. The consolidated exact and approximate theory for a single-circuit twisted three-phase power cable line is reported, together with the proposal of an innovative simplified formula obtained by means of a heuristic procedure. This formula, although dramatically simpler, is proven to be a good approximation of the analytical formula and at the same time much more accurate than the approximate formula found in the literature. The double-circuit twisted three-phase power cable line has been studied following different approaches of increasing complexity and accuracy, and in this framework the effectiveness of the above-mentioned innovative formula is also examined. The experimental verification of the correctness of the twisted double-circuit theoretical analysis has permitted its extension to multiple-circuit twisted three-phase power cable lines. In addition, appropriate 2D and, in particular, 3D numerical codes for simulating real overhead power lines have been created for the calculation of the magnetic field in their vicinity. Finally, an innovative "smart" measurement and evaluation system for the magnetic field is proposed, described and validated. It performs an experimentally based evaluation of the total magnetic field B generated by multiple sources in complex three-dimensional arrangements, on the basis of the measurement of the three Cartesian field components and their correlation with the source currents via multilinear regression techniques. The ultimate goal is to verify that the magnetic induction intensity is within the prescribed limits.
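The multilinear-regression step mentioned above can be sketched as follows: each measured field component is modelled as a linear combination of the source currents, and the coefficients are fitted by least squares. The currents, coefficients and noise level below are synthetic stand-ins chosen for illustration.

```python
# Least-squares fit of one Cartesian field component against the source
# currents (synthetic data): B = currents @ coeff + noise. Once fitted,
# the coefficients predict B for any current loading of the sources.

import numpy as np

rng = np.random.default_rng(1)
n_samples, n_sources = 200, 3
currents = rng.uniform(100.0, 400.0, size=(n_samples, n_sources))  # amperes
true_coeff = np.array([2.0e-7, 5.0e-8, 1.2e-7])   # tesla per ampere (assumed)
b_meas = currents @ true_coeff + rng.normal(0.0, 1e-7, size=n_samples)

coeff, *_ = np.linalg.lstsq(currents, b_meas, rcond=None)
```

Repeating the fit for all three Cartesian components yields a linear model of the total field B, which can then be checked against the prescribed exposure limits for worst-case current loadings.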

Relevance:

20.00%

Publisher:

Abstract:

The electromagnetic form factors of the proton are fundamental quantities sensitive to the distribution of charge and magnetization inside the proton. Precise knowledge of the form factors, in particular of the charge and magnetization radii, provides strong tests for theory in the non-perturbative regime of QCD. However, the existing data at Q^2 below 1 (GeV/c)^2 are not precise enough for a hard test of theoretical predictions.

For a more precise determination of the form factors, within this work more than 1400 cross sections of the reaction H(e,e′)p were measured at the Mainz Microtron MAMI using the 3-spectrometer facility of the A1 collaboration. The data were taken in three periods in the years 2006 and 2007 using beam energies of 180, 315, 450, 585, 720 and 855 MeV. They cover the Q^2 region from 0.004 to 1 (GeV/c)^2 with counting-rate uncertainties below 0.2% for most of the data points. The relative luminosity of the measurements was determined using one of the spectrometers as a luminosity monitor. The overlapping acceptances of the measurements maximize the internal redundancy of the data and allow, together with several additions to the standard experimental setup, tight control of the systematic uncertainties.

To account for the radiative processes, an event generator was developed and implemented in the simulation package of the analysis software; it works without the peaking approximation by explicitly calculating the Bethe-Heitler and Born Feynman diagrams for each event.

To separate the form factors and to determine the radii, the data were analyzed by fitting a wide selection of form factor models directly to the measured cross sections. These fits also determined the absolute normalization of the different data subsets. The validity of this method was tested with extensive simulations.
The results were compared to an extraction via the standard Rosenbluth technique.

The dip structure in G_E that was seen in the analysis of the previous world data shows up in a modified form. When compared to the standard dipole form factor as a smooth curve, the extracted G_E exhibits a strong change of slope around 0.1 (GeV/c)^2, and in the magnetic form factor a dip around 0.2 (GeV/c)^2 is found. This may be taken as an indication of a pion cloud. For higher Q^2, the fits yield larger values for G_M than previous measurements, in agreement with the form factor ratios from recent precise polarized measurements in the Q^2 region up to 0.6 (GeV/c)^2.

The charge and magnetic rms radii are determined as
⟨r_e⟩ = 0.879 ± 0.005(stat.) ± 0.004(syst.) ± 0.002(model) ± 0.004(group) fm,
⟨r_m⟩ = 0.777 ± 0.013(stat.) ± 0.009(syst.) ± 0.005(model) ± 0.002(group) fm.
This charge radius is significantly larger than theoretical predictions and than the radius of the standard dipole. However, it is in agreement with earlier results measured at the Mainz linear accelerator and with determinations from hydrogen Lamb shift measurements. The extracted magnetic radius is smaller than previous determinations and than the standard-dipole value.
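The comparison with the standard-dipole radius can be made concrete: the rms charge radius follows from the slope of the form factor at the photon point, ⟨r^2⟩ = -6 dG/dQ^2 at Q^2 = 0, which for the standard dipole G(Q^2) = (1 + Q^2/0.71)^{-2} gives a closed-form value.

```python
# Standard-dipole rms radius from the slope of G at Q^2 = 0:
# dG/dQ^2|_0 = -2/lambda2, so <r^2> = 12/lambda2 in GeV^-2, converted
# to fm via hbar*c.

import math

HBARC = 0.1973  # GeV*fm

def dipole_radius_fm(lambda2=0.71):
    """rms radius of the standard dipole G(Q^2) = (1 + Q^2/lambda2)^-2."""
    return math.sqrt(12.0 / lambda2) * HBARC

r_dipole = dipole_radius_fm()
# ~0.81 fm, visibly below the extracted charge radius of 0.879 fm above.
```

The same slope relation is what the cross-section fits exploit: any form factor model, once fitted, yields the radius from its derivative at Q^2 = 0.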

Relevance:

20.00%

Publisher:

Abstract:

Recently, surface plasmon field-enhanced fluorescence spectroscopy (SPFS) was developed as a kinetic analysis and detection method for interfacial phenomena, with dual monitoring of the change in reflectivity and of the fluorescence signal. In this study, a fundamental investigation of PNA and DNA interaction at the surface using SPFS is carried out. Furthermore, several specific conditions influencing PNA/DNA hybridization and affinity efficiency are considered, by monitoring refractive index changes and fluorescence variation at the same time. In order to quantify the affinity of PNA/DNA hybridization at the surface, the association rate constant (kon) and the dissociation rate constant (koff) are obtained from titration experiments at various concentrations of target DNA and from kinetic analysis. In addition, to further enhance the PNA/DNA hybridization efficiency, a polarized electric field enhancement system is introduced and studied in detail. DNA is a well-known polyelectrolyte, carrying naturally negatively charged groups in its structure. Under polarized electrical treatment, i.e. applying a DC field to the metal surface on which the PNA probes are immobilized, the negatively charged DNA molecules are attracted by the electrostatic force, drawn close to the surface, and thus have a greater chance of hybridizing with the probe PNA molecules through hydrogen bonding between corresponding base sequences. Several major factors can influence the hybridization efficiency.
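The Langmuir-type kinetic analysis commonly used with such titrations can be sketched as follows; the rate constants below are assumed illustrative values, not results of the study.

```python
# Langmuir 1:1 binding kinetics for a surface titration: at target DNA
# concentration c the observed exponential association rate is
# k_obs = kon*c + koff, and the affinity constant is K_A = kon/koff.
# (kon, koff below are assumed values for illustration.)

kon = 1.0e5    # association rate constant, 1/(M*s)
koff = 1.0e-3  # dissociation rate constant, 1/s

def k_obs(c):
    """Observed rate during the association phase at concentration c (M)."""
    return kon * c + koff

def fraction_bound(c):
    """Equilibrium surface coverage from the Langmuir isotherm."""
    kd = koff / kon
    return c / (c + kd)

K_A = kon / koff                     # affinity constant, 1/M
half = fraction_bound(koff / kon)    # coverage is exactly 1/2 at c = K_D
```

Fitting k_obs against the titrated concentrations gives kon as the slope and koff as the intercept, which is how the two constants are separated in practice.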