997 results for "massive electromagnetic models"


Relevance: 30.00%

Abstract:

Flood disasters are a major cause of fatalities and economic losses, and several studies indicate that global flood risk is currently increasing. To reduce and mitigate the impact of river flood disasters, the current trend is to integrate existing structural defences with non-structural measures. This calls for a wider application of advanced hydraulic models for flood hazard and risk mapping, engineering design, and flood forecasting systems. Within this framework, two different hydraulic models for large-scale analysis of flood events have been developed. The two models, named CA2D and IFD-GGA, adopt an integrated approach based on the diffusive shallow water equations and a simplified finite volume scheme. The models are also designed for massive code parallelization, which is of key importance for reducing run times in large-scale, high-detail applications. The two models were first applied to several numerical test cases to assess the reliability and accuracy of the different model versions. The most effective versions were then applied to real flood events and flood scenarios. The IFD-GGA model showed serious problems that prevented further application. The CA2D model, in contrast, proved to be fast and robust, and able to reproduce 1D and 2D flow processes in terms of water depth and velocity. In most applications the accuracy of the model results was good and adequate for large-scale analysis. Where complex flow processes occurred, local errors were observed due to the model approximations; however, they did not compromise the correct representation of the overall flow processes. In conclusion, the CA2D model can be a valuable tool for simulating a wide range of flood event types, including lowland and flash flood events.
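The CA2D source is not reproduced here; as a minimal sketch of the diffusive (zero-inertia) shallow water approach the abstract describes, the Python fragment below advances water depths on a 1D grid with Manning-type intercell fluxes. The function name, grid setup, and Manning coefficient are illustrative assumptions, not the thesis's implementation.

```python
import numpy as np

def diffusive_wave_step(h, z, dx, dt, n_manning=0.03):
    """One explicit finite-volume step of the diffusive (zero-inertia)
    shallow water equations on a 1D grid.
    h: water depth per cell (m), z: bed elevation per cell (m)."""
    eta = z + h                        # free-surface elevation
    slope = -np.diff(eta) / dx         # surface slope at cell interfaces
    h_face = np.maximum(np.minimum(h[:-1], h[1:]), 0.0)  # conveyance depth
    # Manning's equation for unit-width discharge: q = h^(5/3) sqrt(|S|) / n
    q = (h_face ** (5.0 / 3.0)) * np.sign(slope) * np.sqrt(np.abs(slope)) / n_manning
    dh = np.zeros_like(h)
    dh[:-1] -= q * dt / dx             # outflow from the upstream cell
    dh[1:] += q * dt / dx              # inflow to the downstream cell
    return np.maximum(h + dh, 0.0)     # depths stay non-negative
```

Because each cell update depends only on its immediate neighbours, schemes of this form map naturally onto the massive parallelization the abstract mentions.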

Relevance: 30.00%

Abstract:

The electric dipole response of neutron-rich nickel isotopes has been investigated using the LAND setup at GSI in Darmstadt (Germany). Relativistic secondary beams of 56−57Ni and 67−72Ni at approximately 500 AMeV were generated by projectile fragmentation of stable ions on a 4 g/cm2 Be target and subsequent separation in the magnetic dipole fields of the FRagment Separator (FRS). After reaching the LAND setup in Cave C, the radioactive ions were excited electromagnetically in the electric field of a Pb target, and the decay products were measured in inverse kinematics using various detectors. Neutron-rich 67−69Ni isotopes decay by the emission of neutrons, which are detected in the LAND detector. The present analysis concentrates on the (gamma,n) and (gamma,2n) channels in these nuclei, since the proton and three-neutron thresholds are unlikely to be reached given the virtual photon spectrum for nickel ions at 500 AMeV. A measurement of the stable isotope 58Ni is used as a benchmark to check the present results against previously published data: the measured (gamma,n) and (gamma,np) channels are compared with an inclusive photoneutron measurement by Fultz and coworkers and are consistent within the respective errors. The measured excitation energy distributions of 67−69Ni contain a large portion of the Giant Dipole Resonance (GDR) strength predicted by the Thomas-Reiche-Kuhn energy-weighted sum rule, as well as a significant amount of low-lying E1 strength that cannot be attributed to the GDR alone. The GDR distribution parameters (peak energies and widths) are calculated using well-established semi-empirical systematics. The GDR strength is extracted by chi-square minimization of the model GDR against the measured data of the (gamma,2n) channel, thereby excluding any influence of possible low-lying strength. Subtracting the obtained GDR distribution from the total measured E1 strength yields the low-lying E1 strength distribution, which is attributed to the Pygmy Dipole Resonance (PDR). The peak energy, width, and strength of the PDR are extracted using a Gaussian function. The minimization of trial Gaussian distributions to the data does not converge towards a sharp minimum; therefore, the results are presented as a chi-square distribution over all three Gaussian parameters. The present PDR distribution is compared with various theoretical predictions, as well as with a recent measurement of the 68Ni PDR obtained by virtual photon scattering.
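As an illustration of the Gaussian chi-square procedure described (not the actual GSI analysis code), the sketch below fits a trial Gaussian to a hypothetical residual E1 strength distribution; all data values and starting parameters are invented.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(E, strength, E_peak, width):
    """Trial PDR shape: area-normalized Gaussian scaled by total strength."""
    return strength * np.exp(-0.5 * ((E - E_peak) / width) ** 2) \
        / (width * np.sqrt(2.0 * np.pi))

# Hypothetical residual low-lying E1 strength after GDR subtraction
E = np.array([7.0, 8.0, 9.0, 10.0, 11.0, 12.0])        # excitation energy (MeV)
strength = np.array([0.05, 0.12, 0.20, 0.15, 0.07, 0.03])
errors = np.array([0.03, 0.04, 0.05, 0.05, 0.04, 0.03])

# Least-squares (chi-square) fit; absolute_sigma keeps the quoted errors fixed
popt, pcov = curve_fit(gaussian, E, strength, sigma=errors,
                       absolute_sigma=True, p0=[0.5, 9.5, 1.0])
chi2 = np.sum(((strength - gaussian(E, *popt)) / errors) ** 2)
print(f"peak = {popt[1]:.2f} MeV, width = {popt[2]:.2f} MeV, chi2 = {chi2:.2f}")
```

When, as reported above, the minimization does not converge to a sharp minimum, scanning chi-square over a grid of the three parameters, rather than quoting a single best fit, yields the chi-square distribution the abstract presents.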

Relevance: 30.00%

Abstract:

Diffuse scattering has been the subject of numerous studies in recent years, owing to its relevance to electromagnetic propagation as well as to many other fields of application (remote sensing, optics, physics, etc.), but a complete understanding of this effect is far from being achieved. The complexity of studying and characterizing diffuse scattering stems from the myriad cases and effects that can be encountered in a real propagation environment, which suggests the need to treat its contribution probabilistically. Hence the need for engineering-oriented tools that combine a rigorous definition of the phenomenon with the simplifications required for practical use. In this view, diffuse scattering can be described as the superposition of all those effects that depart from the classical laws of geometrical optics (reflection, refraction, and diffraction), generating field contributions even at points in space and in directions where, for smooth and homogeneous objects, there should theoretically be none. In a real propagation environment, the main effect is therefore a spatial field distribution that differs from the theoretical case of a smooth, homogeneous surface, together with depolarization effects and a redistribution of energy in the power balance. The complexity of the phenomenon is thus evident, and the goal of this work is to present new results that allow diffuse scattering to be described more accurately, and to identify the topics on which future work should focus. A bibliographic study was first carried out to identify existing models and theories and the points deserving further attention; at the same time, methods for characterizing the complex electric permittivity of materials were analyzed, in order to evaluate the possibility of deriving the simulation parameters with the same measurement setup designed for the scattering study. A simulation setup was then built with an electromagnetic solver (based on the finite-difference time-domain method), with which the three-dimensional scattering due to material irregularities was analyzed. Finally, a measurement campaign was conducted in an anechoic chamber with a purpose-built experimental bench to characterize the scattering phenomenon over a wide band.
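As a minimal one-dimensional illustration of the finite-difference time-domain method used for the simulations (the study itself employed a 3D solver), the sketch below performs the standard Yee leapfrog update through a hypothetical dielectric slab; grid, source, and material values are arbitrary.

```python
import numpy as np

# 1D FDTD (Yee scheme) in normalized units where c = 1 and dt = dx
nx, nt = 400, 600
eps_r = np.ones(nx)
eps_r[250:300] = 4.0          # hypothetical dielectric slab (relative permittivity)

Ez = np.zeros(nx)             # electric field at integer grid points
Hy = np.zeros(nx - 1)         # magnetic field at half-integer points

for t in range(nt):
    Hy += np.diff(Ez)                         # update H from the curl of E
    Ez[1:-1] += np.diff(Hy) / eps_r[1:-1]     # update E from the curl of H
    Ez[50] += np.exp(-((t - 60) / 15.0) ** 2) # soft Gaussian pulse source
    # boundaries are simply reflective in this sketch
```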

Relevance: 30.00%

Abstract:

The primary goals of this study were to develop a cell-free in vitro assay for the assessment of nonthermal electromagnetic field (EMF) bioeffects and to develop theoretical models in accord with current experimental observations. Based on the hypothesis that EMF effects operate by modulating Ca2+/CaM binding, an in vitro nitric oxide (NO) synthesis assay was developed to assess the effects of a pulsed radiofrequency (PRF) signal used for the treatment of postoperative pain and edema. No effects of PRF on NO synthesis were observed. Effects of PRF on Ca2+/CaM binding were also assessed using a Ca2+-selective electrode, likewise yielding no evidence of an EMF effect on Ca2+/CaM binding. However, a PRF effect was observed on the interaction of hemoglobin (Hb) with tetrahydrobiopterin, leading to the development of an in vitro Hb deoxygenation assay, which showed a reduction in the rate of Hb deoxygenation for exposures to both PRF and a static magnetic field (SMF). Structural studies using pyranine fluorescence, Gd3+ vibronic sideband luminescence, and attenuated total reflectance Fourier transform infrared (ATR-FTIR) spectroscopy were conducted to ascertain the mechanism of this EMF effect on Hb. The effect of SMF on Hb oxygen saturation (SO2) was also assessed under gas-controlled conditions. These studies showed no definitive changes in protein/solvation structure or SO2 under equilibrium conditions, suggesting the need for real-time instrumentation or other means of observing out-of-equilibrium Hb dynamics. Theoretical models were developed for EMF transduction, effects on ion binding, neuronal spike timing, and the dynamics of Hb deoxygenation. The EMF sensitivity and simplicity of the Hb deoxygenation assay suggest a new tool with which to further establish basic biophysical EMF transduction mechanisms. If an EMF-induced increase in the rate of deoxygenation can be demonstrated in vivo, then enhancement of oxygen delivery may be a new therapeutic method by which clinically relevant EMF-mediated enhancement of growth and repair processes can occur.
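The abstract quantifies the EMF effect as a change in the rate of Hb deoxygenation. A simple way to extract such a rate, sketched below under an assumed first-order kinetic model (the thesis's kinetic treatment may differ), is to fit exponential decays to oxygenation traces and compare rate constants between exposed and control runs; all data are synthetic.

```python
import numpy as np
from scipy.optimize import curve_fit

def first_order(t, s0, k):
    """First-order deoxygenation: oxygen saturation decaying at rate k."""
    return s0 * np.exp(-k * t)

# Hypothetical saturation traces (fractional SO2 vs. time in minutes)
rng = np.random.default_rng(1)
t = np.linspace(0, 30, 16)
so2_control = 0.95 * np.exp(-0.10 * t) + rng.normal(0, 0.01, t.size)
so2_exposed = 0.95 * np.exp(-0.07 * t) + rng.normal(0, 0.01, t.size)

p_ctrl, _ = curve_fit(first_order, t, so2_control, p0=[1.0, 0.1])
p_exp, _ = curve_fit(first_order, t, so2_exposed, p0=[1.0, 0.1])
print(f"k_control = {p_ctrl[1]:.3f} /min, k_exposed = {p_exp[1]:.3f} /min")
```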

Relevance: 30.00%

Abstract:

Several countries have acquired, over the past decades, large amounts of area-covering Airborne Electromagnetic (AEM) data. The contribution of airborne geophysics to both groundwater resource mapping and management has increased dramatically, proving that these systems are appropriate for large-scale, efficient groundwater surveying. We start with the processing and inversion of two AEM datasets from two different systems flown over the Spiritwood Valley Aquifer area, Manitoba, Canada: the AeroTEM III dataset (commissioned by the Geological Survey of Canada in 2010) and the "Full waveform VTEM" dataset, collected and tested over the same survey area during the fall of 2011. We demonstrate that, in the presence of multiple datasets, both AEM and ground data, careful processing, inversion, post-processing, data integration, and data calibration constitute the proper approach for providing reliable and consistent resistivity models. Our approach can be of interest to many end users, from geological surveys and universities to private companies, which often own large geophysical databases to be interpreted for geological and/or hydrogeological purposes. In this study we investigate in depth the role of integrating several complementary types of geophysical data collected over the same survey area. We show that data integration can improve inversions, reduce ambiguity, and deliver high-resolution results. We further use the final, most reliable output resistivity models as a solid basis for building a knowledge-driven 3D geological voxel-based model. A voxel approach allows a quantitative understanding of the hydrogeological setting of the area, and it can be further used to estimate aquifer volumes (i.e. the potential amount of groundwater resources) as well as to support hydrogeological flow model prediction. In addition, we investigated the impact of an AEM dataset on hydrogeological mapping and 3D hydrogeological modeling, comparing it to having only a ground-based TEM dataset and/or only borehole data.
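As a toy example of the quantitative use of a voxel model described above, aquifer volume can be estimated by classifying voxels by resistivity and summing cell volumes; the resistivity band, cell sizes, and data below are hypothetical, not values from the Spiritwood surveys.

```python
import numpy as np

# Hypothetical 3D resistivity model (ohm-m) on a regular voxel grid
rng = np.random.default_rng(0)
rho = rng.lognormal(mean=3.0, sigma=0.8, size=(120, 80, 40))
dx = dy = 50.0   # horizontal cell size (m)
dz = 5.0         # vertical cell size (m)

# Assume the aquifer sand/gravel falls in an intermediate resistivity band
aquifer_mask = (rho > 60.0) & (rho < 300.0)
volume_m3 = aquifer_mask.sum() * dx * dy * dz
print(f"Estimated aquifer volume: {volume_m3 / 1e9:.2f} km^3")
```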

Relevance: 30.00%

Abstract:

This thesis deals with three different physical models, each involving a random component linked to a cubic lattice. First, a model used in numerical calculations of Quantum Chromodynamics is studied, in which random gauge fields are distributed on the bonds of the lattice. The formulation of the model is fitted into the mathematical framework of ergodic operator families. We prove that, for small coupling constants, the ergodicity of the underlying probability measure is indeed ensured and that the integrated density of states of the Wilson-Dirac operator exists. The physical situations treated in the next two chapters are more similar to one another: in both cases the principal idea is to study a fermion system in a cubic crystal with impurities, modeled by a random potential located at the lattice sites. In the second model we apply the Hartree-Fock approximation to such a system. For the case of reduced Hartree-Fock theory at positive temperature and fixed chemical potential, we consider the limit of an infinite system and show the existence and uniqueness of minimizers of the Hartree-Fock functional. In the third model we formulate the fermion system algebraically via C*-algebras. The question posed here is to calculate the heat production of the system under the influence of an external electromagnetic field. We show that the heat production corresponds exactly to what is empirically predicted by Joule's law in the regime of linear response.
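For concreteness, the central object of the first result is the integrated density of states; a generic statement of its definition for an ergodic lattice operator (the thesis's exact normalization may differ) is

$$
N(E) \;=\; \lim_{L \to \infty} \frac{1}{|\Lambda_L|}\, \#\bigl\{\, \text{eigenvalues of } H_\omega|_{\Lambda_L} \le E \,\bigr\},
$$

where $\Lambda_L$ is the cube of side length $L$, $H_\omega|_{\Lambda_L}$ is the restriction of the random operator to $\Lambda_L$, and ergodicity of the underlying probability measure guarantees that the limit exists and is non-random for almost every realization $\omega$.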

Relevance: 30.00%

Abstract:

Prediction of radiated fields from transmission lines has not previously been studied from a panoptical power system perspective. The application of BPL technologies to overhead transmission lines would benefit greatly from an ability to simulate real power system environments, not limited to the transmission lines themselves. Presently, circuit-based transmission line models used by EMTP-type programs utilize Carson's formula for a waveguide parallel to an interface. This formula is not valid for calculations at high frequencies, where the effects of earth return currents become significant. This thesis explains the challenges of developing such improved models, explores an approach to combining circuit-based and electromagnetics modeling to predict radiated fields from transmission lines, exposes inadequacies of simulation tools, and suggests methods of extending the validity of transmission line models into very high frequency ranges. Electromagnetics programs are commonly used to study radiated fields from transmission lines. However, the approach proposed here is also able to incorporate the components of a power system through the combined use of EMTP-type models. Carson's formulas address the series impedance of electrical conductors above and parallel to the earth. These equations have been analyzed to expose their inherent assumptions and implications. Additionally, their lack of validity at higher frequencies has been demonstrated, showing the need to replace Carson's formulas for these types of studies. This body of work leads to several conclusions about the relatively new study of BPL. Foremost, there is a gap in modeling capabilities which has been bridged through integration of circuit-based and electromagnetics modeling, allowing more realistic prediction of BPL performance and radiated fields. The proposed approach is limited in its scope of validity by the formulas used by EMTP-type software. To extend the range of validity, a new set of equations must be identified and implemented in the approach. Several potential methods of implementation have been explored. Though an appropriate set of equations has not yet been identified, further research in this area will benefit from a clear depiction of the next important steps and how they can be accomplished.
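For reference, the sketch below evaluates the familiar low-frequency approximation of Carson's self impedance using the equivalent earth-return depth; it is precisely this power-frequency approximation whose breakdown at BPL frequencies the thesis documents. The constant in De and all geometry values are nominal textbook figures, not values from the thesis.

```python
import numpy as np

MU0 = 4e-7 * np.pi  # permeability of free space (H/m)

def carson_self_impedance(r_ac, gmr, freq, rho_earth=100.0):
    """Low-frequency approximation of Carson's self impedance (ohm/m)
    using the equivalent earth-return depth De ~ 658.4*sqrt(rho/f) m.
    r_ac: conductor AC resistance (ohm/m), gmr: geometric mean radius (m)."""
    omega = 2.0 * np.pi * freq
    De = 658.4 * np.sqrt(rho_earth / freq)   # earth-return depth (m)
    r_earth = omega * MU0 / 8.0              # earth-return resistance term
    x = (omega * MU0 / (2.0 * np.pi)) * np.log(De / gmr)
    return (r_ac + r_earth) + 1j * x

# 60 Hz power-frequency value vs. a BPL-band frequency, where (as argued
# above) the formula is merely extrapolating and should not be trusted
for f in (60.0, 30e6):
    z = carson_self_impedance(r_ac=0.1e-3, gmr=0.01, freq=f)
    print(f"f = {f:.0f} Hz: Z' = {z:.6f} ohm/m")
```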

Relevance: 30.00%

Abstract:

Open collaborative projects are moving to the foreground of knowledge production. Some online user communities develop into long-term projects that generate highly valuable and at the same time freely accessible output. Traditional copyright law, organized around the idea of a single creative entity, is not well equipped to accommodate the needs of these forms of collaboration. In order to enable this peculiar, network-type of interaction, participants instead draw on public licensing models that determine the freedoms to use individual contributions. With the help of these access rules, the operational logic of the project can be implemented successfully. However, as the case of the Wikipedia GFDL-to-CC license transition demonstrates, the adaptation of access rules in networks to new circumstances raises collective action problems and suffers from pitfalls caused by the fact that public licensing is grounded in individual copyright. Legal governance of open collaboration projects is a largely unexplored field. The article argues that the license steward of a public license assumes the position of a fiduciary of the knowledge commons generated under the license regime. Ultimately, the governance of decentralized networks translates into a composite of organizational and contractual elements. It is concluded that the production of global knowledge commons relies on rules of transnational private law.

Relevance: 30.00%

Abstract:

Context. Planet formation models have been developed over the past years to try to reproduce what has been observed of both the solar system and the extrasolar planets. Some of these models have partially succeeded, but they focus on massive planets and, for the sake of simplicity, exclude planets belonging to planetary systems. However, more and more planets are now found in planetary systems. This tendency, which is a result of radial velocity, transit, and direct imaging surveys, seems to be even more pronounced for low-mass planets. These new observations require improving planet formation models, including new physics, and considering the formation of systems. Aims: In a recent series of papers, we presented improvements in the physics of our models, focusing in particular on the internal structure of forming planets and on the computation of the excitation state of planetesimals and their resulting accretion rate. In this paper, we focus on the concurrent formation of more than one planet in the same protoplanetary disc and show the effect of this multiplicity on the architecture and composition of the resulting systems. Methods: We used an N-body calculation including collision detection to compute the orbital evolution of a planetary system. Moreover, we describe the effect of competition for the accretion of gas and solids, as well as the effect of gravitational interactions between planets. Results: We show that the masses and semi-major axes of planets are modified by both competition and gravitational interactions. We also present the effect of the assumed number of forming planets in the same system (a free parameter of the model), as well as the effect of inclination and eccentricity damping. We find that the fraction of ejected planets increases from nearly 0 to 8% as the number of embryos we seed the system with is increased from 2 to 20. Moreover, our calculations show that, when considering planets more massive than ~5 M⊕, simulations with 10 or 20 planetary embryos statistically give the same results in terms of mass function and period distribution.
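As a schematic of the N-body-with-collision-detection machinery mentioned in the Methods (the production code is far more elaborate), the sketch below advances bodies with a leapfrog step and merges any overlapping pair while conserving mass and momentum; units and values are arbitrary code units.

```python
import numpy as np

G = 1.0  # gravitational constant in code units

def accelerations(pos, mass):
    """Pairwise Newtonian accelerations (no softening, for clarity)."""
    acc = np.zeros_like(pos)
    for i in range(len(mass)):
        d = pos - pos[i]                     # vectors from body i to all others
        r3 = np.sum(d * d, axis=1) ** 1.5
        r3[i] = np.inf                       # skip self-interaction
        acc[i] = G * np.sum(mass[:, None] * d / r3[:, None], axis=0)
    return acc

def step_with_collisions(pos, vel, mass, radius, dt):
    """One leapfrog (kick-drift-kick) step, then a perfect-merger check."""
    vel += 0.5 * dt * accelerations(pos, mass)
    pos += dt * vel
    vel += 0.5 * dt * accelerations(pos, mass)
    n = len(mass)
    for i in range(n):
        for j in range(i + 1, n):
            if np.linalg.norm(pos[i] - pos[j]) < radius[i] + radius[j]:
                m = mass[i] + mass[j]        # momentum-conserving merger
                pos[i] = (mass[i] * pos[i] + mass[j] * pos[j]) / m
                vel[i] = (mass[i] * vel[i] + mass[j] * vel[j]) / m
                mass[i] = m
                radius[i] = (radius[i]**3 + radius[j]**3) ** (1.0 / 3.0)
                keep = np.arange(n) != j
                return pos[keep], vel[keep], mass[keep], radius[keep]
    return pos, vel, mass, radius
```

In a full simulation this step would be repeated while also tracking gas and solid accretion and disc torques, as the abstract describes.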

Relevance: 30.00%

Abstract:

Context. According to the sequential accretion model (or core-nucleated accretion model), giant planet formation begins with the formation of a solid core which, when massive enough, can gravitationally bind gas from the nebula to form the envelope. The most critical part of the model is the formation time of the core: to trigger the accretion of gas, the core has to grow to several Earth masses before the gas component of the protoplanetary disc dissipates. Aims: We calculate planetary formation models including a detailed description of the dynamics of the planetesimal disc, taking into account both gas drag and excitation by forming planets. Methods: We computed the formation of planets, considering the oligarchic regime for the growth of the solid core. Embryos growing in the disc stir their neighbouring planetesimals, exciting their relative velocities, which makes accretion more difficult. Here we introduce a more realistic treatment of the evolution of planetesimals' relative velocities, which directly impacts the formation timescale. To this end, we computed the excitation state of planetesimals resulting from stirring by forming planets and from gas-solid interactions. Results: We find that the formation of giant planets is favoured by the accretion of small planetesimals, as their random velocities are more easily damped by the gas drag of the nebula. Moreover, the capture radius of a protoplanet with a (tiny) envelope is also larger for small planetesimals. However, planets migrate as a result of disc-planet angular momentum exchange, with important consequences for their survival: owing to the slow growth of a protoplanet in the oligarchic regime, rapid inward type I migration has important implications for intermediate-mass planets that have not yet started their runaway gas accretion phase. Most of these planets are lost into the central star. Surviving planets have masses either below 10 M⊕ or above several Jupiter masses. Conclusions: To form giant planets before the dissipation of the disc, small planetesimals (~0.1 km) have to be the major contributors to the solid accretion process. However, the combination of oligarchic growth and fast inward migration leads to an absence of intermediate-mass planets. Other processes must therefore be at work to explain the population of extrasolar planets presently known.
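The statement that small planetesimals are more easily damped can be made concrete with the standard damping timescale t ≈ mv/F_drag for quadratic gas drag, F_drag = ½ C_D π r² ρ_gas v², which grows linearly with planetesimal radius. The sketch below uses nominal disc values (all assumed, and valid only in the high-Reynolds-number drag regime):

```python
import numpy as np

def drag_damping_timescale(r_pl, v_rel, rho_gas=1e-9, rho_solid=2.0, c_d=1.0):
    """Damping timescale t = m*v / F_drag for quadratic (high-Re) gas drag.
    r_pl: planetesimal radius (cm), v_rel: random velocity (cm/s),
    densities in g/cm^3; returns seconds.
    Algebraically t = (8/3) * rho_solid * r_pl / (c_d * rho_gas * v_rel),
    i.e. linear in planetesimal radius."""
    mass = (4.0 / 3.0) * np.pi * rho_solid * r_pl ** 3
    f_drag = 0.5 * c_d * np.pi * r_pl ** 2 * rho_gas * v_rel ** 2
    return mass * v_rel / f_drag

YEAR = 3.156e7  # seconds
for r_km in (0.1, 1.0, 100.0):
    t = drag_damping_timescale(r_pl=r_km * 1e5, v_rel=1e4)  # v_rel = 100 m/s
    print(f"r = {r_km:6.1f} km: damping time ~ {t / YEAR:.1e} yr")
```

The hundredfold shorter damping time for ~0.1 km bodies relative to ~10 km ones is the quantitative content of the conclusion above.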

Relevance: 30.00%

Abstract:

The existence of an association between leukemia and electromagnetic fields (EMF) is still controversial. The results of epidemiologic studies of leukemia in occupational groups with exposure to EMF are inconsistent; weak associations have been seen in a few studies. EMF assessment has lacked precision: reported dose-response relationships have been based on qualitative levels of exposure to EMF without regard to duration of employment or EMF intensity on the jobs, and potential confounding factors were often not well controlled. The current study is an analysis of data collected from an incident case-control study. The primary objective was to test the hypothesis that occupational exposure to EMF is associated with leukemia, including total leukemia (TL), myelogenous leukemia (MYELOG), and acute non-lymphoid leukemia (ANLL). Potential confounding factors (occupational exposure to benzene, age, smoking, alcohol consumption, and previous medical radiation exposures) were controlled in multivariate logistic regression models. Dose-response relationships were estimated by cumulative occupational exposure to EMF, taking into account both duration of employment and EMF intensity on the jobs. To overcome weaknesses of most previous studies, special efforts were made to improve the precision of the EMF assessment. Two definitions of EMF exposure were used, and discrepancies between the results under the two definitions were observed. These discrepancies raise the question of whether workers in jobs with low EMF exposure should be treated as non-exposed in future studies. In addition, the current study suggests the use of lifetime cumulative EMF exposure estimates to determine dose-response relationships. The analyses suggest an association between ANLL and employment in selected jobs with high EMF exposure. Whether an association exists between the three types of leukemia and broader categories of occupational EMF exposure remains undetermined. If an association does exist between occupational EMF exposure and leukemia, the results of the current study suggest that EMF might be a factor only in the promotion of leukemia, not in its initiation.
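The dose-response construction described can be illustrated schematically: lifetime cumulative exposure is the sum over a subject's job history of EMF intensity times duration, and this quantity enters a logistic model alongside the confounders listed above. Everything below (variable names, simulated data, coefficients) is hypothetical:

```python
import numpy as np
import statsmodels.api as sm

def cumulative_emf(job_history):
    """Lifetime cumulative exposure: sum of intensity (uT) x duration (years).
    Example: cumulative_emf([(0.2, 10), (1.5, 5)]) -> 9.5 uT-years."""
    return sum(intensity * years for intensity, years in job_history)

# Simulated study population (all values invented)
rng = np.random.default_rng(0)
n = 200
cum_emf = rng.gamma(shape=2.0, scale=5.0, size=n)   # uT-years
benzene = rng.binomial(1, 0.2, size=n)              # ever held a benzene-exposed job
age = rng.normal(55, 8, size=n)
logit = -3.0 + 0.05 * cum_emf + 0.8 * benzene + 0.02 * (age - 55)
case = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))  # leukemia case status

# Multivariate logistic regression: EMF dose adjusted for confounders
X = sm.add_constant(np.column_stack([cum_emf, benzene, age]))
fit = sm.Logit(case, X).fit(disp=0)
print("odds ratio per uT-year:", np.exp(fit.params[1]))
```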

Relevance: 30.00%

Abstract:

Maximizing data quality may be especially difficult in trauma-related clinical research. Strategies are needed to improve data quality and to assess the impact of data quality on clinical predictive models. This study had two objectives. The first was to compare missing data between two multi-center trauma transfusion studies: a retrospective study (RS) using medical chart data with minimal data quality review, and the PRospective Observational Multi-center Major Trauma Transfusion (PROMMTT) study with standardized quality assurance. The second was to assess the impact of missing data on clinical prediction algorithms by evaluating blood transfusion prediction models using PROMMTT data. RS (2005-06) and PROMMTT (2009-10) investigated trauma patients receiving ≥ 1 unit of red blood cells (RBC) at ten Level I trauma centers. Missing data were compared for 33 variables collected in both studies using mixed effects logistic regression (including random intercepts for study site). Massive transfusion (MT) patients received ≥ 10 RBC units within 24 h of admission. Correct classification percentages for three MT prediction models were evaluated using complete case analysis and multiple imputation based on the multivariate normal distribution. A sensitivity analysis for missing data was conducted to estimate the upper and lower bounds of correct classification under best- and worst-case assumptions about the missing data. Most variables (17/33 = 52%) had <1% missing data in both RS and PROMMTT. Of the remaining variables, 50% had less missingness in PROMMTT, 25% had less missingness in RS, and 25% were similar between studies. Missing percentages for MT prediction variables in PROMMTT ranged from 2.2% (heart rate) to 45% (respiratory rate). For variables missing >1%, study site was associated with missingness (all p ≤ 0.021). Survival time predicted missingness for 50% of RS and 60% of PROMMTT variables. Complete case proportions for the MT models ranged from 41% to 88%. Complete case analysis and multiple imputation yielded similar correct classification results. Sensitivity analysis upper-lower bound ranges for the three MT models were 59-63%, 36-46%, and 46-58%. Prospective collection of ten-fold more variables with data quality assurance reduced overall missing data. Study site and patient survival were associated with missingness, suggesting that data were not missing completely at random and that complete case analysis may lead to biased results. Evaluating clinical prediction model accuracy may be misleading in the presence of missing data, especially with many predictor variables. The proposed sensitivity analysis, estimating correct classification under upper (best case) and lower (worst case) bounds, may be more informative than multiple imputation, which provided results similar to complete case analysis.
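The best/worst-case bound construction described above can be written down directly: complete cases are scored against the model, while incomplete cases are counted as all correct (upper bound) or all incorrect (lower bound). A generic sketch with a placeholder prediction rule:

```python
import numpy as np

def classification_bounds(X, y, predict, missing_mask):
    """Correct-classification bounds under best/worst-case missingness.
    X: predictors (may contain NaN), y: observed MT outcome (0/1),
    predict: fitted model's classifier applied to complete rows,
    missing_mask: True where any predictor for that row is missing."""
    complete = ~missing_mask
    correct_complete = np.sum(predict(X[complete]) == y[complete])
    n, n_missing = len(y), missing_mask.sum()
    lower = correct_complete / n                  # worst case: all incomplete wrong
    upper = (correct_complete + n_missing) / n    # best case: all incomplete right
    return lower, upper

# Example with a trivial threshold rule standing in for a fitted MT model
X = np.array([[10.0], [3.0], [np.nan], [12.0], [np.nan]])
y = np.array([1, 0, 1, 1, 0])
mask = np.isnan(X).any(axis=1)
rule = lambda X: (X[:, 0] >= 8).astype(int)   # hypothetical prediction rule
print(classification_bounds(X, y, rule, mask))  # -> (0.6, 1.0)
```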

Relevance: 30.00%

Abstract:

Mutations in the amyloid precursor protein (APP) gene cause early-onset familial Alzheimer disease (AD) by affecting the formation of the amyloid β (Aβ) peptide, the major constituent of AD plaques. We expressed human APP751 containing these mutations in the brains of transgenic mice. Two transgenic mouse lines develop pathological features reminiscent of AD. The degree of pathology depends on expression levels and specific mutations. A 2-fold overexpression of human APP with the Swedish double mutation at positions 670/671 combined with the V717I mutation causes Aβ deposition in neocortex and hippocampus of 18-month-old transgenic mice. The deposits are mostly of the diffuse type; however, some congophilic plaques can be detected. In mice with 7-fold overexpression of human APP harboring the Swedish mutation alone, typical plaques appear at 6 months, which increase with age and are Congo Red-positive at first detection. These congophilic plaques are accompanied by neuritic changes and dystrophic cholinergic fibers. Furthermore, inflammatory processes indicated by a massive glial reaction are apparent. Most notably, plaques are immunoreactive for hyperphosphorylated tau, reminiscent of early tau pathology. The immunoreactivity is exclusively found in congophilic senile plaques of both lines. In the higher expressing line, elevated tau phosphorylation can be demonstrated biochemically in 6-month-old animals and increases with age. These mice resemble major features of AD pathology and suggest a central role of Aβ in the pathogenesis of the disease.