931 results for Space-time codes (STCs)


Relevance:

30.00%

Publisher:

Abstract:

Clay mineral-rich sedimentary formations are currently under investigation to evaluate their potential use as host formations for installation of deep underground disposal facilities for radioactive waste (e.g. Boom Clay (BE), Opalinus Clay (CH), Callovo-Oxfordian argillite (FR)). The ultimate safety of the corresponding repository concepts depends largely on the capacity of the host formation to limit the flux towards the biosphere of radionuclides (RN) contained in the waste to acceptably low levels. Data for diffusion-driven transfer in these formations show extreme differences in the measured or modelled behaviour of various radionuclides, e.g. between halogen RN (Cl-36, I-129) and actinides (U-238, U-235, Np-237, Th-232, etc.), which result from major differences between RN in how two phenomena, diffusion and sorption, affect transport. This paper describes recent research aimed at improving understanding of these two phenomena, focusing on the results of studies carried out during the EC Funmig IP on clayrocks from the above three formations and from the Boda formation (HU). Project results regarding phenomena governing water, cation and anion distribution and mobility in the pore volumes influenced by the negatively-charged surfaces of clay minerals show a convergence of the modelling results for behaviour at the molecular scale and descriptions based on electrical double layer models. Transport models exist which couple ion distribution relative to the clay-solution interface with differentiated diffusive characteristics. These codes are able to reproduce the main trends in behaviour observed experimentally, e.g. De(anion) < De(HTO) < De(cation) and De(anion) variations as a function of ionic strength and material density. These trends are also well explained by models of transport through ideal porous matrices made up of a charged surface material. Experimental validation of these models is good for monovalent alkali cations, in progress for divalent electrostatically-interacting cations (e.g. Sr2+) and still relatively poor for 'strongly sorbing', high-Kd cations. Funmig results have clarified understanding of how clayrock mineral composition, and the corresponding organisation of mineral grain assemblages and their associated porosity, can affect mobile solute (anions, HTO) diffusion at different scales (mm to geological formation). In particular, advances made in the capacity to map clayrock mineral grain-porosity organisation at high resolution provide additional elements for understanding diffusion anisotropy and for relating diffusion characteristics measured at different scales. On the other hand, the results of studies focusing on evaluating the potential effects of heterogeneity on mobile species diffusion at the formation scale tend to show that the effect is minimal when compared to a homogeneous property model. Finally, the results of a natural tracer-based study carried out on the Opalinus Clay formation increase confidence in the use of diffusion parameters measured on laboratory-scale samples for predicting diffusion over geological time-space scales. Much effort was placed on improving understanding of coupled sorption-diffusion phenomena for sorbing cations in clayrocks. Results regarding sorption equilibrium in dispersed and compacted materials for weakly to moderately sorbing cations (Sr2+, Cs+, Co2+) tend to show that the same sorption model probably holds in both systems. It was not possible to demonstrate this for highly sorbing elements such as Eu(III) because of the extremely long times needed to reach equilibrium conditions, but there does not seem to be any clear reason why such elements should not behave similarly. Diffusion experiments carried out with Sr2+, Cs+ and Eu(III) on all of the clayrocks gave mixed results and tend to show that coupled diffusion-sorption migration is much more complex than expected, generally leading to greater mobility than that predicted by coupling a batch-determined Kd with Fick's law based on the diffusion behaviour of HTO. If the Kd measured on equivalent dispersed systems holds, as was shown to be the case for Sr and Cs (and probably Co) on Opalinus Clay, these results indicate that these cations have a De value higher than that of HTO (up to a factor of 10 for Cs+). Results are as yet very limited for moderately to strongly sorbing species (e.g. Co(II), Eu(III), Cu(II)) because of their very slow transfer characteristics.
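
As a rough illustration of the Kd-based coupling mentioned above, the sketch below computes the apparent diffusivity implied by combining a batch-determined Kd with Fick's law through the usual rock-capacity factor; all numerical values are invented for illustration and the helper name is ours, not Funmig's.

def apparent_diffusivity(De, porosity, bulk_density, Kd):
    """Da = De / (porosity + bulk_density * Kd); the denominator is the
    rock-capacity factor that retards a sorbing tracer relative to a
    non-sorbing one such as HTO."""
    return De / (porosity + bulk_density * Kd)

De_HTO = 2e-11   # effective diffusion coefficient of HTO, m2/s (invented)
De_Cs = 2e-10    # up to ~10x De(HTO) for Cs+, as reported above (invented)
eps, rho_d = 0.15, 2300.0   # porosity (-), dry bulk density (kg/m3)
Kd_Cs = 0.05     # batch-determined distribution coefficient, m3/kg

print(apparent_diffusivity(De_HTO, eps, rho_d, Kd=0.0))  # non-sorbing tracer
print(apparent_diffusivity(De_Cs, eps, rho_d, Kd_Cs))    # sorbing cation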

Relevance:

30.00%

Publisher:

Abstract:

Stem cell regeneration of damaged tissue has recently been reported in many different organs. Since the loss of retinal pigment epithelium (RPE) in the eye is associated with a major cause of visual loss - specifically, age-related macular degeneration - we investigated whether hematopoietic stem cells (HSC) given systemically can home to the damaged subretinal space and express markers of RPE lineage. Green fluorescent protein (GFP)-expressing cells of bone marrow origin were used in a sodium iodate (NaIO3) model of RPE damage in the mouse. The optimal time for adoptive transfer of bone marrow-derived stem cells relative to the time of injury and the optimal cell type [whole bone marrow, mobilized peripheral blood, HSC, facilitating cells (FC)] were determined by counting the number of GFP+ cells in whole-eye flat mounts. Immunocytochemistry was performed to identify the bone marrow origin of the cells in the RPE using antibodies for CD45, Sca-1, and c-kit, as well as the expression of the RPE-specific marker RPE-65. The time at which bone marrow-derived cells were adoptively transferred relative to the time of NaIO3 injection did not significantly influence the number of cells that homed to the subretinal space. At both one and two weeks after intravenous (i.v.) injection, GFP+ cells of bone marrow origin were observed in the damaged subretinal space, at sites of RPE loss, but not in the normal subretinal space. The combined transplantation of HSC+FC cells appeared to favor the survival of the homed stem cells at two weeks, and RPE-65 was expressed by adoptively transferred HSC by four weeks. We have shown that systemically injected HSC homed to the subretinal space in the presence of RPE damage and that FC promoted survival of these cells. Furthermore, the RPE-specific marker RPE-65 was expressed on adoptively transferred HSC in the denuded areas.

Relevance:

30.00%

Publisher:

Abstract:

Boston Harbor has had a history of poor water quality, including contamination by enteric pathogens. We conduct a statistical analysis of data collected by the Massachusetts Water Resources Authority (MWRA) between 1996 and 2002 to evaluate the effects of court-mandated improvements in sewage treatment. Motivated by the ineffectiveness of standard Poisson mixture models and their zero-inflated counterparts, we propose a new negative binomial model for time series of Enterococcus counts in Boston Harbor, where nonstationarity and autocorrelation are modeled using a nonparametric smooth function of time in the predictor. Without further restrictions, this function is not identifiable in the presence of time-dependent covariates; consequently we use a basis orthogonal to the space spanned by the covariates and use penalized quasi-likelihood (PQL) for estimation. We conclude that Enterococcus counts were greatly reduced near the Nut Island Treatment Plant (NITP) outfalls following the transfer of wastewaters from NITP to the Deer Island Treatment Plant (DITP) and that the transfer of wastewaters from Boston Harbor to the offshore diffusers in Massachusetts Bay reduced the Enterococcus counts near the DITP outfalls.
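
A minimal sketch of the identifiability fix described above: constructing a smooth-of-time basis orthogonal to the space spanned by the covariates. The data, basis choice, and variable names are invented; the actual MWRA analysis fits a negative binomial model by penalized quasi-likelihood.

import numpy as np

rng = np.random.default_rng(0)
n = 365
t = np.linspace(0.0, 1.0, n)
X = np.column_stack([np.ones(n), rng.normal(size=n)])  # measured covariates

# A simple polynomial basis for the smooth function of time; any spline
# basis would serve the same purpose.
B = np.column_stack([t**k for k in range(1, 6)])

# Project out the column space of X, so the smooth trend cannot absorb
# variation explained by time-dependent covariates.
Q, _ = np.linalg.qr(X)
B_perp = B - Q @ (Q.T @ B)

design = np.column_stack([X, B_perp])  # design matrix for the count model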

Relevance:

30.00%

Publisher:

Abstract:

In environmental epidemiology, exposure X and health outcome Y vary in space and time. We present a method to diagnose the possible influence of unmeasured confounders U on the estimated effect of X on Y and propose several approaches to robust estimation. The idea is to use space and time as proxy measures for the unmeasured factors U. We start with the time series case, where X and Y are continuous variables at equally-spaced times, and assume a linear model. We define matching estimators b(u) that correspond to pairs of observations separated by a specific lag u. Controlling for a smooth function of time, St, using a kernel estimator is roughly equivalent to estimating the association with a linear combination of the b(u) with weights that involve two components: the assumptions about the smoothness of St and the normalized variogram of the X process. When an unmeasured confounder U exists, but the model otherwise correctly controls for measured confounders, excess variation in the b(u) is evidence of confounding by U. We use the plot of b(u) versus lag u, the lagged-estimator plot (LEP), to diagnose the influence of U on the effect of X on Y. We use an appropriate linear combination of the b(u), or extrapolate to b(0), to obtain novel estimators that are more robust to the influence of smooth U. The methods are extended to time series log-linear models and to spatial analyses. The LEP gives a direct view of the magnitude of the estimators at each lag u and provides evidence when the model does not adequately describe the data.
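
A sketch of one plausible form of the lag-u matching estimator b(u) and the resulting lagged-estimator plot, on simulated data; the exact estimator used in the paper may differ.

import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
n = 500
x = rng.normal(size=n).cumsum() * 0.1 + rng.normal(size=n)      # exposure
y = 0.5 * x + np.sin(np.arange(n) / 50.0) + rng.normal(size=n)  # outcome

def b(u, x, y):
    # Slope of outcome differences on exposure differences over all
    # observation pairs separated by exactly u time units.
    dx, dy = x[u:] - x[:-u], y[u:] - y[:-u]
    return np.sum(dx * dy) / np.sum(dx * dx)

lags = np.arange(1, 31)
plt.plot(lags, [b(u, x, y) for u in lags], "o-")  # the LEP
plt.xlabel("lag u")
plt.ylabel("b(u)")
plt.show()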

Relevance:

30.00%

Publisher:

Abstract:

We present an overview of different methods for decomposing a multichannel spontaneous electroencephalogram (EEG) into sets of temporal patterns and topographic distributions. All of the methods presented here consider the scalp electric field as the basic analysis entity in space. In time, the resolution of the methods is between milliseconds (time-domain analysis), subseconds (time- and frequency-domain analysis) and seconds (frequency-domain analysis). For any of these methods, we show that large parts of the data can be explained by a small number of topographic distributions. Physically, this implies that the brain regions that generated one of those topographies must have been active with a common phase. If several brain regions are producing EEG signals at the same time and frequency, they have a strong tendency to do this in a synchronized mode. This view is illustrated by several examples (including combined EEG and functional magnetic resonance imaging (fMRI)) and a selective review of the literature. The findings are discussed in terms of short-lasting binding between different brain regions through synchronized oscillations, which could constitute a mechanism to form transient, functional neurocognitive networks.
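
A toy sketch of the shared idea behind these decompositions: a multichannel recording is factored into a small set of fixed topographies weighted by temporal patterns. Here an SVD on simulated low-rank data stands in for the reviewed methods, which differ in their temporal resolution and analysis domain.

import numpy as np

rng = np.random.default_rng(2)
n_channels, n_samples = 64, 5000
mixing = rng.normal(size=(n_channels, 5))   # 5 fixed topographies
sources = rng.normal(size=(5, n_samples))   # their temporal patterns
eeg = mixing @ sources + 0.1 * rng.normal(size=(n_channels, n_samples))

centered = eeg - eeg.mean(axis=1, keepdims=True)
U, s, Vt = np.linalg.svd(centered, full_matrices=False)
explained = np.cumsum(s**2) / np.sum(s**2)
k = int(np.searchsorted(explained, 0.95)) + 1
topographies, time_courses = U[:, :k], s[:k, None] * Vt[:k]
print(f"{k} topographies explain 95% of the variance")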

Relevance:

30.00%

Publisher:

Abstract:

Currently, photon Monte Carlo treatment planning (MCTP) for a patient stored in the patient database of a treatment planning system (TPS) can usually only be performed using a cumbersome multi-step procedure requiring many user interactions. This means automation is needed for use in clinical routine. In addition, because of the long computing times in MCTP, optimization of the MC calculations is essential. For these purposes a new graphical user interface (GUI)-based photon MC environment has been developed, resulting in a very flexible framework. By this means, appropriate MC transport methods are assigned to different geometric regions while still benefiting from the features included in the TPS. In order to provide a flexible MC environment, the MC particle transport has been divided into different parts: the source, beam modifiers and the patient. The source part includes the phase-space source, source models and full MC transport through the treatment head. The beam modifier part consists of one module for each beam modifier. To simulate the radiation transport through each individual beam modifier, one of three full MC transport codes can be selected independently. Additionally, for each beam modifier a simple or an exact geometry can be chosen. Thereby, different complexity levels of radiation transport are applied during the simulation. For the patient dose calculation, two different MC codes are available. A special plug-in in Eclipse, providing all necessary information by means of DICOM streams, was used to start the developed MC GUI. The implementation of this framework separates the MC transport from the geometry, and the modules pass the particles in memory; hence, no files are used as the interface. The implementation is realized for 6 and 15 MV beams of a Varian Clinac 2300 C/D. Several applications demonstrate the usefulness of the framework. Apart from applications dealing with the beam modifiers, two patient cases are shown, in which comparisons are performed between MC-calculated dose distributions and those calculated by a pencil beam or the AAA algorithm. Interfacing this flexible and efficient MC environment with Eclipse allows widespread use for all kinds of investigations, from timing and benchmarking studies to clinical patient studies. Additionally, it is possible to add modules, keeping the system highly flexible and efficient.
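
A schematic sketch (all class and parameter names invented) of the modular in-memory hand-off described above, in which source, beam modifier, and patient modules pass particle lists along without intermediate files:

from dataclasses import dataclass
from typing import List

@dataclass
class Particle:
    energy: float            # MeV
    position: tuple
    direction: tuple

class Module:
    def transport(self, particles: List[Particle]) -> List[Particle]:
        raise NotImplementedError

class PhaseSpaceSource(Module):
    def transport(self, particles):
        return particles     # would sample from a phase space or source model

class BeamModifier(Module):
    def __init__(self, mc_code="code_A", exact_geometry=True):
        # each modifier independently selects its MC code and whether a
        # simple or an exact geometry is simulated
        self.mc_code, self.exact_geometry = mc_code, exact_geometry
    def transport(self, particles):
        return particles     # placeholder for the selected MC transport

def simulate(modules, particles):
    for module in modules:   # particles are handed over in memory; no files
        particles = module.transport(particles)
    return particles

beam = simulate([PhaseSpaceSource(), BeamModifier(), BeamModifier("code_B")],
                [Particle(6.0, (0.0, 0.0, 0.0), (0.0, 0.0, 1.0))])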

Relevance:

30.00%

Publisher:

Abstract:

BEAMnrc, a code for simulating medical linear accelerators based on EGSnrc, has been benchmarked and used extensively in the scientific literature and is therefore often considered the gold standard for Monte Carlo simulations for radiotherapy applications. However, its long computation times make it too slow for clinical routine, and often even for research purposes, without a large investment in computing resources. VMC++ is a much faster code thanks to the intensive use of variance reduction techniques and a much faster implementation of the condensed history technique for charged particle transport. A research version of this code is also capable of simulating the full head of linear accelerators operated in photon mode (excluding multileaf collimators, hard and dynamic wedges). In this work, a validation of the full head simulation at 6 and 18 MV is performed, simulating with VMC++ and BEAMnrc the addition of one head component at a time and comparing the resulting phase space files. For the comparison, photon and electron fluence, photon energy fluence, mean energy, and photon spectra are considered. The largest absolute differences are found in the energy fluences. For all the simulations of the different head components, very good agreement (differences in energy fluences between VMC++ and BEAMnrc <1%) is obtained. Only one particular case at 6 MV shows a somewhat larger energy fluence difference of 1.4%. Dosimetrically, these phase space differences imply agreement between both codes at the <1% level, making the VMC++ head module suitable for full head simulations with a considerable gain in efficiency and without loss of accuracy.
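
A small sketch of the kind of phase-space comparison reported above: binned photon energy fluence from two codes and their relative difference. The arrays are randomly generated stand-ins for real phase-space files.

import numpy as np

rng = np.random.default_rng(3)
n = 10**5
r_a, E_a = rng.uniform(0, 20, n), rng.gamma(2.0, 1.0, n)  # code A particles
r_b, E_b = rng.uniform(0, 20, n), rng.gamma(2.0, 1.0, n)  # code B particles

bins = np.linspace(0.0, 20.0, 41)  # radial bins, cm
psi_a, _ = np.histogram(r_a, bins=bins, weights=E_a)  # energy fluence
psi_b, _ = np.histogram(r_b, bins=bins, weights=E_b)
rel_diff = 100.0 * (psi_a - psi_b) / psi_b
print("max |relative difference| = %.2f %%" % np.abs(rel_diff).max())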

Relevance:

30.00%

Publisher:

Abstract:

Free space optical (FSO) communication links can experience extreme signal degradation due to atmospheric-turbulence-induced spatial and temporal irradiance fluctuations (scintillation) in the laser wavefront. In addition, turbulence can cause the laser beam centroid to wander, resulting in power fading and sometimes complete loss of the signal. Spreading of the laser beam and jitter are also artifacts of atmospheric turbulence. To accurately predict the signal fading that occurs in a laser communication system, and to get a true picture of how this affects crucial performance parameters like bit error rate (BER), it is important to analyze the probability density function (PDF) of the integrated irradiance fluctuations at the receiver. In addition, it is desirable to find a theoretical distribution that accurately models these fluctuations under all propagation conditions. The PDF of integrated irradiance fluctuations is calculated from numerical wave-optics simulations of a laser after propagating through atmospheric turbulence in order to investigate the evolution of the distribution as the aperture diameter is increased. The simulation data distribution is compared to theoretical gamma-gamma and lognormal PDF models under a variety of scintillation regimes from weak to very strong. Our results show that the gamma-gamma PDF provides a good fit to the simulated data distribution for all aperture sizes studied from weak through moderate scintillation. In strong scintillation, the gamma-gamma PDF is a better fit to the distribution for point-like apertures, and the lognormal PDF is a better fit for apertures the size of the atmospheric spatial coherence radius ρ0 or larger. In addition, the PDF of received power from a Gaussian laser beam that has been adaptively compensated at the transmitter before propagation to the receiver of an FSO link in the moderate scintillation regime is investigated. The complexity of the adaptive optics (AO) system is increased in order to investigate the changes in the distribution of the received power and how this affects the BER. For the 10 km link, due to the non-reciprocal nature of the propagation path, the optimal beam to transmit is unknown. These results show that a low order of AO complexity provides a better estimate for the optimal beam to transmit than a higher order for non-reciprocal paths. For the 20 km link distance it was found that, although the improvement was minimal, all AO complexity levels provided an equivalent improvement in BER, and that no AO complexity level provided the correction needed for the optimal beam to transmit. Finally, the temporal power spectral density of received power from an FSO communication link is investigated. Simulated and experimental results for the coherence time calculated from the temporal correlation function are presented. Results for both simulation and experimental data show that the coherence time increases as the receiving aperture diameter increases. For finite apertures the coherence time increases as the communication link distance is increased. We conjecture that this is due to the increasing speckle size within the pupil plane of the receiving aperture for an increasing link distance.
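
For reference, the gamma-gamma irradiance PDF used in the model comparison can be written in terms of the modified Bessel function of the second kind; the sketch below (with illustrative alpha, beta values) assumes the standard form found in the scintillation literature.

import numpy as np
from scipy.special import gamma, kv

def gamma_gamma_pdf(I, alpha, beta):
    # Standard gamma-gamma irradiance PDF; alpha and beta characterize the
    # large- and small-scale scintillation, and kv is the modified Bessel
    # function of the second kind.
    ab = alpha * beta
    norm = 2.0 * ab**((alpha + beta) / 2.0) / (gamma(alpha) * gamma(beta))
    return norm * I**((alpha + beta) / 2.0 - 1.0) * kv(alpha - beta,
                                                       2.0 * np.sqrt(ab * I))

I = np.linspace(0.01, 6.0, 600)               # normalized irradiance
p = gamma_gamma_pdf(I, alpha=4.0, beta=2.0)   # illustrative parameters
print((p * (I[1] - I[0])).sum())              # should be close to 1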

Relevance:

30.00%

Publisher:

Abstract:

Space Based Solar Power satellites use solar arrays to generate clean, green, and renewable electricity in space and transmit it to Earth via microwave, radio wave or laser beams to corresponding receivers (ground stations). These traditionally are large structures orbiting around Earth at geosynchronous altitude. This thesis introduces a new architecture for a Space Based Solar Power satellite constellation. The proposed concept reduces the high cost involved in the construction of the space satellite and in the multiple launches to geosynchronous altitude. The proposed concept is a constellation of Low Earth Orbit satellites that are smaller in size than the conventional system. For this application a Repeated Sun-Synchronous Track Circular Orbit (RSSTO) is considered. In these orbits, the spacecraft revisits the same locations on Earth periodically every given desired number of days, with the line of nodes of the spacecraft's orbit fixed relative to the Sun. A wide range of solutions are studied, and, in this thesis, a two-orbit constellation design is chosen and simulated. The number of satellites is chosen based on the electric power demands in a given set of global cities. The orbits of the satellites are designed such that their ground tracks visit a maximum number of ground stations during the revisit period. In the simulation, the locations of the ground stations are chosen close to big cities, in the USA and worldwide, so that the space power constellation beams down power directly to locations of high electric power demand. The J2 perturbations are included in the mathematical model used in orbit design. The coverage time of each spacecraft over a ground site and the gap time between two consecutive spacecraft visiting a ground site are simulated in order to evaluate the coverage continuity of the proposed solar power constellation. It has been observed from simulations that there are always periods in which a spacecraft does not communicate with any ground station. For this reason, it is suggested that each satellite in the constellation be equipped with power storage components so that it can store power for later transmission. This thesis presents a method for designing the solar power constellation orbits such that the number of ground stations visited during the given revisit period is maximized. This leads to maximizing the power transmission to ground stations.
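
A minimal sketch of the J2-driven Sun-synchronous condition underlying the RSSTO design: the inclination is chosen so that the nodal regression of the orbit matches the Sun's apparent mean motion. The constants are standard values; the helper function is our illustration, not the thesis code.

import numpy as np

mu = 3.986004418e14   # Earth's gravitational parameter, m3/s2
Re = 6378137.0        # Earth's equatorial radius, m
J2 = 1.08263e-3
node_rate_sun = 2.0 * np.pi / (365.2422 * 86400.0)  # rad/s, ~0.9856 deg/day

def sun_synchronous_inclination(altitude_m, e=0.0):
    # Choose the inclination whose J2 nodal regression,
    # -1.5 * J2 * (Re/p)**2 * n * cos(i), equals the Sun's mean motion.
    a = Re + altitude_m
    n = np.sqrt(mu / a**3)       # mean motion
    p = a * (1.0 - e**2)         # semi-latus rectum
    cos_i = -node_rate_sun / (1.5 * J2 * (Re / p)**2 * n)
    return np.degrees(np.arccos(cos_i))

print(sun_synchronous_inclination(700e3))  # ~98 degrees for a 700 km orbit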

Relevance:

30.00%

Publisher:

Abstract:

Nearly 22 million Americans operate as shift workers, and shift work has been linked to the development of cardiovascular disease (CVD). This study is aimed at identifying pivotal risk factors of CVD by assessing 24-hour ambulatory blood pressure, state anxiety levels and sleep patterns in 12-hour fixed shift workers. We hypothesized that night shift work would negatively affect blood pressure regulation, anxiety levels and sleep patterns. A total of 28 subjects (ages 22-60) were divided into two groups: 12-hour fixed night shift workers (n=15) and 12-hour fixed day shift workers (n=13). 24-hour ambulatory blood pressure measurements (Space Labs 90207) were taken twice: once during a regular work day and once on a non-work day. State anxiety levels were assessed on both test days using Spielberger's State-Trait Anxiety Inventory. Total sleep time (TST) was determined using self-recorded sleep diaries. Night shift workers demonstrated increases in 24-hour systolic (122 ± 2 to 126 ± 2 mmHg, P=0.012), diastolic (75 ± 1 to 79 ± 2 mmHg, P=0.001) and mean arterial pressures (90 ± 2 to 94 ± 2 mmHg, P<0.001) during work days compared to off days. In contrast, 24-hour blood pressures were similar during work and off days in day shift workers. Night shift workers reported less TST on work days versus off days (345 ± 16 vs. 552 ± 30 min; P<0.001), whereas day shift workers reported similar TST during work and off days (475 ± 16 vs. 437 ± 20 min; P=0.231). State anxiety scores did not differ between the groups or testing days (time*group interaction P=0.248), suggesting that the increased 24-hour blood pressure during night shift work is related to decreased TST, not short-term anxiety. Our findings suggest that fixed night shift work causes disruption of the normal sleep-wake cycle, negatively affecting acute blood pressure regulation, which may increase the long-term risk for CVD.

Relevance:

30.00%

Publisher:

Abstract:

The problem of optimal design of multi-gravity-assist space trajectories with a free number of deep space maneuvers (MGADSM) poses multi-modal cost functions. In the general form of the problem, the number of design variables is solution dependent. To handle global optimization problems where the number of design variables varies from one solution to another, two novel genetic-based techniques are introduced: the hidden genes genetic algorithm (HGGA) and the dynamic-size multiple population genetic algorithm (DSMPGA). In HGGA, a fixed length for the design variables is assigned to all solutions. The independent variables of each solution are divided into effective and ineffective (hidden) genes. Hidden genes are excluded from cost function evaluations. Full-length solutions undergo standard genetic operations. In DSMPGA, sub-populations of fixed-size design spaces are randomly initialized. Standard genetic operations are carried out for a stage of generations. A new population is then created by reproduction from all members based on their relative fitness. The resulting sub-populations have different sizes from their initial sizes. The process repeats, leading to an increase in the size of the sub-populations of more fit solutions. Both techniques are applied to several MGADSM problems. They have the capability to determine the number of swing-bys, the planets to swing by, launch and arrival dates, and the number of deep space maneuvers as well as their locations, magnitudes, and directions in an optimal sense. The results show that solutions obtained using the developed tools match known solutions for complex case studies. The HGGA is also used to obtain the asteroid sequence and the mission structure in the global trajectory optimization competition (GTOC) problem. As an application of GA optimization to Earth orbits, the problem of visiting a set of ground sites within a constrained time frame is solved. The J2 perturbation and zonal coverage are considered in designing repeated Sun-synchronous orbits. Finally, a new set of orbits, the repeated shadow track orbits (RSTO), is introduced. The orbit parameters are optimized such that the shadow of a spacecraft on the Earth visits the same locations periodically every desired number of days.
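
A toy sketch of the hidden-genes idea on an invented minimization problem: chromosomes share a fixed full length, only the genes flagged as effective enter the cost, and hidden genes ride along through the standard genetic operations.

import random

L = 10  # fixed chromosome length shared by all solutions

def random_solution():
    genes = [random.uniform(-1.0, 1.0) for _ in range(L)]
    n_eff = random.randint(1, L)  # solution-dependent number of variables
    return genes, n_eff

def cost(solution):
    genes, n_eff = solution
    return sum(g * g for g in genes[:n_eff])  # hidden genes are ignored

def crossover(a, b):
    genes = [random.choice(pair) for pair in zip(a[0], b[0])]  # full length
    return genes, random.choice([a[1], b[1]])

population = [random_solution() for _ in range(50)]
for _ in range(100):
    population.sort(key=cost)
    parents = population[:25]
    population = parents + [crossover(*random.sample(parents, 2))
                            for _ in range(25)]
print(min(cost(s) for s in population))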

Relevance:

30.00%

Publisher:

Abstract:

With the insatiable curiosity of human beings to explore the universe and our solar system, it is essential to benefit from larger propulsion capabilities to execute efficient transfers and carry more scientific equipment. In the field of space trajectory optimization, fundamental advances in using low-thrust propulsion and exploiting multi-body dynamics have played a pivotal role in designing efficient space mission trajectories. The former provides a larger cumulative momentum change in comparison with conventional chemical propulsion, whereas the latter results in almost ballistic trajectories requiring a negligible amount of propellant. However, the problem of space trajectory design translates into an optimal control problem which is, in general, time-consuming and very difficult to solve. Therefore, the goal of this thesis is to address the above problem by developing a methodology to simplify and facilitate the process of finding initial low-thrust trajectories in both two-body and multi-body environments. This initial solution not only provides mission designers with a better understanding of the problem and solution but also serves as a good initial guess for high-fidelity optimal control solvers and increases their convergence rate. Almost all of the high-fidelity solvers benefit from an initial guess that already satisfies the equations of motion and some of the most important constraints. Despite the nonlinear nature of the problem, a robust technique is sought for a wide range of typical low-thrust transfers with reduced computational intensity. Another important aspect of the developed methodology is the representation of low-thrust trajectories by Fourier series, with which the number of design variables is reduced significantly. Emphasis is placed on simplifying the equations of motion to the extent possible and avoiding approximation of the controls. These choices contribute to speeding up the solution-finding procedure. Several example applications of two- and three-dimensional two-body low-thrust transfers are considered. In addition, in multi-body dynamics, and in particular the restricted three-body problem, several Earth-to-Moon low-thrust transfers are investigated.
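
A minimal sketch of the Fourier-series representation mentioned above: one trajectory coordinate written as a truncated Fourier series, so that a handful of coefficients replaces a dense time discretization (coefficient values invented).

import numpy as np

def fourier_series(t, T, a0, a, b):
    # r(t) = a0 + sum_n a_n cos(2*pi*n*t/T) + b_n sin(2*pi*n*t/T)
    n = np.arange(1, len(a) + 1)[:, None]
    w = 2.0 * np.pi * n * t[None, :] / T
    return a0 + (np.asarray(a)[:, None] * np.cos(w)
                 + np.asarray(b)[:, None] * np.sin(w)).sum(axis=0)

T = 10.0                      # transfer time (invented units)
t = np.linspace(0.0, T, 200)
r = fourier_series(t, T, a0=1.0, a=[0.10, 0.02], b=[0.05, 0.01])
# Velocities and accelerations (hence the required thrust, through the
# equations of motion) follow by differentiating each term analytically.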

Relevance:

30.00%

Publisher:

Abstract:

Back-in-time debuggers are extremely useful tools for identifying the causes of bugs, as they allow us to inspect the past states of objects no longer present in the current execution stack. Unfortunately, the "omniscient" approaches that try to remember all previous states are impractical because they either consume too much space or are far too slow. Several approaches rely on heuristics to limit these penalties, but they ultimately end up throwing away too much relevant information. In this paper we propose a practical approach to back-in-time debugging that attempts to keep track of only the relevant past data. In contrast to other approaches, we keep object history information together with the regular objects in the application memory. Although seemingly counter-intuitive, this approach has the effect that past data that is not reachable from current application objects (and hence no longer relevant) is automatically garbage collected. In this paper we describe the technical details of our approach, and we present benchmarks that demonstrate that memory consumption stays within practical bounds. Furthermore, since our approach works at the virtual machine level, the performance penalty is significantly better than with other approaches.
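
A toy sketch (in Python, not the paper's virtual-machine-level implementation) of the central idea: history is stored on the object itself, so when the object becomes unreachable its recorded past states are garbage collected along with it.

class Tracked:
    """Toy object whose past field values live on the object itself."""
    def __init__(self, **fields):
        object.__setattr__(self, "_history", [])
        object.__setattr__(self, "_fields", dict(fields))

    def __setattr__(self, name, value):
        # remember the previous value alongside the object, not globally
        self._history.append((name, self._fields.get(name)))
        self._fields[name] = value

    def __getattr__(self, name):
        try:
            return object.__getattribute__(self, "_fields")[name]
        except KeyError:
            raise AttributeError(name)

    def past(self, name):
        return [v for (n, v) in self._history if n == name]

p = Tracked(x=1)
p.x = 2
p.x = 3
print(p.x, p.past("x"))  # 3 [1, 2]
# Once p is unreachable, its history is collected too: no global store.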