945 results for Data modeling


Relevance: 30.00%

Abstract:

The South Florida Water Management District (SFWMD) manages and operates numerous water control structures that are subject to scour. In an effort to reduce scour downstream of these gated structures, laboratory experiments were performed to investigate the effect of active air injection downstream of the terminal structure of a gated spillway on the depth of the scour hole. A literature review involving similar research revealed significant variables such as the ratio of headwater-to-tailwater depths, the diffuser angle, sediment uniformity, and the ratio of air-to-water volumetric discharge values. The experimental design was based on the analysis of several of these non-dimensional parameters. Bed scouring at stilling basins downstream of gated spillways has been identified as posing a serious risk to the spillway's structural stability. Although this type of scour has been studied in the past, it continues to represent a real threat to water control structures and requires additional attention. A hydraulic scour channel comprising a head tank, flow straightening section, gated spillway, stilling basin, scour section, sediment trap, and tail-tank was used to further this analysis. Experiments were performed in a laboratory channel consisting of a 1:30 scale model of the SFWMD S65E spillway structure. To ascertain the feasibility of air injection for scour reduction, a proof-of-concept study was performed. Experiments were conducted without air entrainment and with high, medium, and low air entrainment rates for high and low headwater conditions. For the cases with no air entrainment, it was found that there was excessive scour downstream of the structure due to a downward roller formed upon exiting the downstream sill of the stilling basin. When air was introduced vertically just downstream of, and at the same level as, the stilling basin sill, it was found that air entrainment does reduce scour depth by up to 58% depending on the air flow rate, but shifts the deepest scour location to the sides of the channel bed instead of the center. Various hydraulic flow conditions were tested without air injection to determine which scenario caused the most scour. That scenario, uncontrolled free, in which water does not contact the gate and the water elevation in the stilling basin is lower than the spillway crest, was used for the remainder of the experiments testing air injection. Various air flow rates, diffuser elevations, air hole diameters, air hole spacings, diffuser angles and widths were tested in over 120 experiments. Optimal parameters include air injection at a rate that results in a water-to-air ratio of 0.28, air holes 1.016 mm in diameter spanning the entire width of the stilling basin, and a vertically oriented injection pattern. Detailed flow measurements were collected for one case using air injection and one without. An identical flow scenario was used for each experiment, namely that of a high flow rate and upstream headwater depth and a low tailwater depth. Equilibrium bed scour and velocity measurements were taken using an Acoustic Doppler Velocimeter at nearly 3,000 points. Velocity data were used to construct a vector plot in order to identify which flow components contribute to the scour hole. Additionally, turbulence parameters were calculated in an effort to help understand why air injection reduced bed scour. Turbulence intensities, normalized mean flow, normalized kinetic energy, and anisotropy of turbulence plots were constructed. A clear trend emerged showing that air injection reduces turbulence near the bed and therefore reduces scour potential.
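
The turbulence quantities mentioned above (turbulence intensities and kinetic energy derived from ADV velocity time series) can be computed in a few lines of code. The sketch below is a generic Python illustration, not the author's processing scripts; the synthetic velocity series and all variable names are placeholders.

```python
import numpy as np

def turbulence_stats(u, v, w):
    """Basic turbulence statistics from an ADV velocity time series (m/s)."""
    U = np.array([u, v, w])                 # shape (3, N)
    mean = U.mean(axis=1)                   # time-averaged velocity components
    fluct = U - mean[:, None]               # fluctuating components u', v', w'
    rms = fluct.std(axis=1)                 # turbulence intensities (RMS of fluctuations)
    tke = 0.5 * np.sum(rms**2)              # turbulent kinetic energy per unit mass
    U_mag = np.linalg.norm(mean)            # magnitude of the mean flow
    return {"mean": mean,
            "intensity": rms,
            "relative_intensity": rms / U_mag if U_mag > 0 else np.full(3, np.nan),
            "tke": tke}

# Synthetic data standing in for one ADV measurement point
rng = np.random.default_rng(0)
u = 0.8 + 0.05 * rng.standard_normal(5000)
v = 0.02 * rng.standard_normal(5000)
w = -0.1 + 0.03 * rng.standard_normal(5000)
print(turbulence_stats(u, v, w))
```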

Relevance: 30.00%

Abstract:

Power transformers are key components of the power grid and are also among the components most subjected to a variety of power system transients. The failure of a large transformer can cause severe monetary losses to a utility, thus adequate protection schemes are of great importance to avoid transformer damage and maximize the continuity of service. Computer modeling can be used as an efficient tool to improve the reliability of a transformer protective relay application. Unfortunately, transformer models presently available in commercial software lack completeness in the representation of several aspects, such as internal winding faults, which are a common cause of transformer failure. It is also important to adequately represent the transformer at frequencies higher than the power frequency for a more accurate simulation of switching transients, since these are a well-known cause of unwanted tripping of protective relays. This work develops new capabilities for the Hybrid Transformer Model (XFMR) implemented in ATPDraw to allow the representation of internal winding faults and slow-front transients up to 10 kHz. The new model can be developed from either of two sources of information: 1) test-report data and 2) design data. When only test-report data are available, a higher-order leakage inductance matrix is created from standard measurements. If design information is available, a Finite Element Model is created to calculate the leakage parameters for the higher-order model. An analytical model is also implemented as an alternative to FEM modeling. Measurements on 15-kVA 240Δ/208Y V and 500-kVA 11430Y/235Y V distribution transformers were performed to validate the model. A transformer model valid for simulations at frequencies above the power frequency was developed by further subdividing the windings into multiple sections and including a higher-order capacitance matrix. Frequency-scan laboratory measurements were used to benchmark the simulations. Finally, a stability analysis of the higher-order model was performed by analyzing the trapezoidal rule for numerical integration as used in ATP. Numerical damping was also added to suppress oscillations locally when discontinuities occurred in the solution. A maximum error magnitude of 7.84% was encountered in the simulated currents for different turn-to-ground and turn-to-turn faults. The FEM approach provided the most accurate means of determining the leakage parameters for the ATP model. The higher-order model was found to reproduce the short-circuit impedance acceptably up to about 10 kHz, and the behavior at the first anti-resonant frequency was better matched with the measurements.
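
The stability issue addressed at the end of the abstract — the trapezoidal rule sustaining numerical oscillations when a discontinuity occurs — can be reproduced with a toy example. The sketch below is not the ATP implementation: it computes the voltage across an ideal inductor whose current is suddenly chopped, once with the pure trapezoidal companion form and once with an illustrative amount of numerical damping obtained by blending in backward Euler (the blend factor is an assumption, chosen only to show how the oscillation decays).

```python
import numpy as np

L, dt = 0.1, 1e-4          # inductance (H) and time step (s) -- illustrative values
steps = 40

# Inductor current: 1 A, then chopped to zero at step 20 (discontinuous di/dt)
i = np.where(np.arange(steps) < 20, 1.0, 0.0)

def inductor_voltage(i, alpha):
    """v = L di/dt integrated with a damped trapezoidal rule.

    alpha = 0     -> pure trapezoidal (oscillates after the discontinuity)
    alpha = 1     -> backward Euler (heavily damped)
    0 < alpha < 1 -> partial numerical damping
    """
    v = np.zeros_like(i, dtype=float)
    for n in range(1, len(i)):
        trap = (2 * L / dt) * (i[n] - i[n - 1]) - v[n - 1]   # trapezoidal companion form
        be = (L / dt) * (i[n] - i[n - 1])                    # backward Euler
        v[n] = (1 - alpha) * trap + alpha * be
    return v

print("undamped  :", np.round(inductor_voltage(i, 0.0)[18:26], 1))   # sustained +/- swings
print("damped 15%:", np.round(inductor_voltage(i, 0.15)[18:26], 1))  # oscillation dies out
```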

Relevance: 30.00%

Abstract:

The Mobile Mesh Network-based In-Transit Visibility (MMN-ITV) system provides global real-time tracking capability for logistics systems. In-transit containers form a multi-hop mesh network to forward the tracking information to nearby sinks, which further deliver the information to the remote control center via satellite. The fundamental challenge to the MMN-ITV system is the energy constraint of the battery-operated containers. Coupled with the unique mobility pattern, the cross-MMN behavior, and the large spanned area, this makes it necessary to investigate the energy-efficient communication of the MMN-ITV system thoroughly. First, this dissertation models energy-efficient routing under the unique pattern of the cross-MMN behavior. A new modeling approach, the pseudo-dynamic modeling approach, is proposed to measure the energy efficiency of routing methods in the presence of the cross-MMN behavior. With this approach, it was identified that shortest-path routing and load-balanced routing are energy-efficient in mobile networks and static networks, respectively. For the MMN-ITV system, with both mobile and static MMNs, an energy-efficient routing method, energy-threshold routing, is proposed to achieve the best tradeoff between them. Secondly, due to the cross-MMN behavior, neighbor discovery is executed frequently to help new containers join the MMN and hence consumes a similar amount of energy to the data communication itself. By exploiting the unique pattern of the cross-MMN behavior, this dissertation proposes energy-efficient neighbor-discovery wakeup schedules that save up to 60% of the energy used for neighbor discovery. Vehicular Ad Hoc Network (VANET)-based inter-vehicle communication is now widely believed to enhance traffic safety and transportation management at low cost. The end-to-end delay is critical for time-sensitive safety applications in VANETs and can be a decisive performance metric for them. This dissertation presents a complete analytical model to evaluate the end-to-end delay against the transmission range and the packet arrival rate. The model shows a significant end-to-end delay increase from non-saturated to saturated networks. It hence suggests that distributed power control and admission control protocols for VANETs should aim at improving the real-time capacity (the maximum packet generation rate without causing saturation) rather than the delay itself. Based on the above model, it could be determined that adopting a uniform transmission range for every vehicle may hinder delay performance improvement, since it does not allow the coexistence of short path lengths and low interference. Clusters are proposed to configure non-uniform transmission ranges for the vehicles. Analysis and simulation confirm that such a configuration can enhance the real-time capacity. In addition, it provides an improved tradeoff between the end-to-end delay and the network capacity. A distributed clustering protocol with minimum message overhead is proposed, which achieves low convergence time.
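
The abstract does not spell out the energy-threshold routing rule, so the sketch below illustrates one plausible reading of the idea: prefer shortest-hop paths, but exclude relay nodes whose remaining battery has fallen below a threshold, which shifts load toward better-provisioned containers. The topology, energy values and the exact rule are assumptions for illustration only.

```python
import heapq

def energy_threshold_route(graph, energy, src, dst, threshold):
    """Shortest-hop path that avoids relays whose remaining energy is below `threshold`.

    graph:  dict node -> iterable of neighbor nodes
    energy: dict node -> remaining energy (arbitrary units)
    Falls back to a plain shortest path if no threshold-respecting path exists.
    """
    def dijkstra(allowed):
        dist, prev = {src: 0}, {}
        pq = [(0, src)]
        while pq:
            d, u = heapq.heappop(pq)
            if u == dst:
                path = [dst]
                while path[-1] != src:
                    path.append(prev[path[-1]])
                return path[::-1]
            if d > dist.get(u, float("inf")):
                continue
            for v in graph[u]:
                if v != dst and v != src and not allowed(v):
                    continue                      # skip depleted relay nodes
                if d + 1 < dist.get(v, float("inf")):
                    dist[v], prev[v] = d + 1, u
                    heapq.heappush(pq, (d + 1, v))
        return None

    return (dijkstra(lambda v: energy[v] >= threshold)
            or dijkstra(lambda v: True))          # fallback: ignore the threshold

# Toy 5-node container network (hypothetical topology and energy levels)
g = {"A": ["B", "C"], "B": ["A", "D"], "C": ["A", "D"], "D": ["B", "C", "E"], "E": ["D"]}
e = {"A": 10, "B": 1, "C": 7, "D": 8, "E": 10}
print(energy_threshold_route(g, e, "A", "E", threshold=3))   # routes around depleted B
```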

Relevance: 30.00%

Abstract:

Lava flow modeling can be a powerful tool in hazard assessments; however, the ability to produce accurate models is usually limited by a lack of high-resolution, up-to-date Digital Elevation Models (DEMs). This is especially apparent in places such as Kilauea Volcano (Hawaii), where active lava flows frequently alter the terrain. In this study, we use a new technique to create high-resolution DEMs of Kilauea using synthetic aperture radar (SAR) data from the TanDEM-X (TDX) satellite. We convert raw TDX SAR data into a geocoded DEM using GAMMA software [Werner et al., 2000]. This process can be completed in several hours and permits creation of updated DEMs as soon as new TDX data are available. To test the DEMs, we use the Harris and Rowland [2001] FLOWGO lava flow model combined with the Favalli et al. [2005] DOWNFLOW model to simulate the 3–15 August 2011 eruption on Kilauea's East Rift Zone. Results were compared with simulations using the older, lower-resolution 2000 SRTM DEM of Hawaii. Effusion rates used in the model are derived from MODIS thermal infrared satellite imagery. FLOWGO simulations using the TDX DEM produced a single flow line that matched the August 2011 flow almost perfectly, but could not recreate the entire flow field due to the relatively high DEM noise level. The issue of short model flow lengths can be resolved by filtering noise from the DEM. Model simulations using the outdated SRTM DEM produced a flow field that followed a different trajectory from that observed. Numerous lava flows have been emplaced at Kilauea since the creation of the SRTM DEM, leading the model to project flow lines into areas that have since been covered by fresh lava flows. These results show that DEMs can quickly become outdated on active volcanoes, but our new technique offers the potential to produce accurate, updated DEMs for modeling lava flow hazards.
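
The DOWNFLOW component of this modeling chain is based on repeatedly perturbing the DEM with random noise and tracing steepest-descent paths from the vent, so that the fraction of runs crossing each cell approximates the probability of inundation. The sketch below illustrates that idea on a synthetic elevation grid; it is not the Favalli et al. [2005] code, and the grid and noise amplitude are arbitrary.

```python
import numpy as np

def steepest_descent_path(dem, start, max_steps=10_000):
    """Trace a steepest-descent path from `start` (row, col) until a pit or the grid edge."""
    path, (r, c) = [start], start
    for _ in range(max_steps):
        window = dem[max(r - 1, 0):r + 2, max(c - 1, 0):c + 2]
        dr, dc = np.unravel_index(np.argmin(window), window.shape)
        nr, nc = max(r - 1, 0) + dr, max(c - 1, 0) + dc
        if dem[nr, nc] >= dem[r, c]:                 # local pit: stop
            break
        r, c = nr, nc
        path.append((r, c))
        if r in (0, dem.shape[0] - 1) or c in (0, dem.shape[1] - 1):
            break                                    # reached the grid edge
    return path

def downflow_probability(dem, vent, n_runs=500, noise_amplitude=2.0, seed=0):
    """Fraction of noisy-DEM steepest-descent runs crossing each cell (DOWNFLOW idea)."""
    rng = np.random.default_rng(seed)
    hits = np.zeros_like(dem, dtype=float)
    for _ in range(n_runs):
        noisy = dem + rng.uniform(-noise_amplitude, noise_amplitude, dem.shape)
        for r, c in steepest_descent_path(noisy, vent):
            hits[r, c] += 1
    return hits / n_runs

# Synthetic cone-shaped DEM standing in for a real TanDEM-X-derived grid
y, x = np.mgrid[0:100, 0:100]
dem = 100 - 0.8 * np.hypot(x - 50, y - 10)           # slope falling away from a "vent"
prob = downflow_probability(dem, vent=(10, 50))
print("cells reached in >10% of runs:", int((prob > 0.1).sum()))
```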

Relevance: 30.00%

Abstract:

The municipality of San Juan La Laguna, Guatemala, is home to approximately 5,200 people and is located on the western side of the Lake Atitlán caldera. Steep slopes surround all but the eastern side of San Juan. The Lake Atitlán watershed is susceptible to many natural hazards, but the most predictable are the landslides that can occur annually with each rainy season, especially during high-intensity events. Hurricane Stan hit Guatemala in October 2005; the resulting flooding and landslides devastated the Atitlán region. Locations of landslide and non-landslide points were obtained from field observations and orthophotos taken following Hurricane Stan. This study used data from multiple attributes at every landslide and non-landslide point and applied different multivariate analyses to optimize a model for landslide prediction during high-intensity precipitation events like Hurricane Stan. The attributes considered in this study are geology, geomorphology, distance to faults and streams, land use, slope, aspect, curvature, plan curvature, profile curvature and topographic wetness index. The attributes were pre-evaluated for their ability to predict landslides using four different attribute evaluators, all available in the open-source data mining software Weka: filtered subset, information gain, gain ratio and chi-squared. Three multivariate algorithms (decision tree J48, logistic regression and BayesNet) were optimized for landslide prediction using different attributes. The following statistical parameters were used to evaluate model accuracy: precision, recall, F-measure and area under the receiver operating characteristic (ROC) curve. The BayesNet algorithm yielded the most accurate model and was used to build a probability map of landslide initiation points. The probability map developed in this study was also compared to the results of a bivariate landslide susceptibility analysis conducted for the watershed, encompassing Lake Atitlán and San Juan. Landslides from Tropical Storm Agatha in 2010 were used to independently validate this study's multivariate model and the bivariate model. The ultimate aim of this study is to share the methodology and results with municipal contacts from the author's time as a U.S. Peace Corps volunteer, to facilitate more effective future landslide hazard planning and mitigation.
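
The study ran its attribute evaluation and classification in Weka; an equivalent workflow can be sketched in Python with scikit-learn for two of the three algorithm families (logistic regression and a J48-like decision tree; Weka's BayesNet has no direct scikit-learn counterpart). The attribute table below is synthetic and the column names are placeholders.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

# Placeholder attribute table: one row per landslide / non-landslide point
rng = np.random.default_rng(1)
n = 400
X = pd.DataFrame({
    "slope": rng.uniform(0, 60, n),              # degrees
    "dist_to_stream": rng.uniform(0, 500, n),    # metres
    "wetness_index": rng.uniform(2, 20, n),
    "plan_curvature": rng.normal(0, 1, n),
})
# Synthetic label: steeper, wetter, stream-adjacent cells fail more often
logit = 0.08 * X["slope"] + 0.15 * X["wetness_index"] - 0.01 * X["dist_to_stream"] - 2
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

models = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "decision tree (J48-like)": DecisionTreeClassifier(max_depth=4, random_state=0),
}
for name, model in models.items():
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
    print(f"{name:25s} ROC AUC = {auc.mean():.2f} +/- {auc.std():.2f}")
```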

Relevance: 30.00%

Abstract:

Over the past several decades, it has become apparent that anthropogenic activities have resulted in the large-scale enhancement of the levels of many trace gases throughout the troposphere. More recently, attention has been given to the transport pathway taken by these emissions as they are dispersed throughout the atmosphere. The transport pathway determines the physical characteristics of emission plumes and therefore plays an important role in the chemical transformations that can occur downwind of source regions. For example, the production of ozone (O3) is strongly dependent upon the transport its precursors undergo. O3 can initially be formed within air masses while still over polluted source regions. These polluted air masses can experience continued O3 production or O3 destruction downwind, depending on the air mass's chemical and transport characteristics. At present, however, there are a number of uncertainties in the relationships between transport and O3 production in the North Atlantic lower free troposphere. The first phase of the study presented here used measurements made at the Pico Mountain observatory and model simulations to determine transport pathways for US emissions to the observatory. The Pico Mountain observatory was established in the summer of 2001 in order to address the need to understand the relationships between transport and O3 production. Measurements from the observatory were analyzed in conjunction with simulations from the Lagrangian particle dispersion model (LPDM) FLEXPART in order to determine the transport pathway for events observed at the Pico Mountain observatory during July 2003. A total of 16 events were observed, 4 of which were analyzed in detail. The transport time for these 16 events varied from 4.5 to 7 days, while the transport altitudes over the ocean ranged from 2 to 8 km but were typically less than 3 km. In three of the case studies, eastward advection and transport in a weak warm conveyor belt (WCB) airflow were responsible for the export of North American emissions into the FT, while transport in the FT was governed by easterly winds driven by the Azores/Bermuda High (ABH) and transient northerly lows. In the fourth case study, North American emissions were lofted to 6–8 km in a WCB before being entrained in the same cyclone's dry airstream and transported down to the observatory. The results of this study show that the lower marine FT may provide an important transport environment where O3 production may continue, in contrast to transport in the marine boundary layer, where O3 destruction is believed to dominate. The second phase of the study presented here focused on improving the analysis methods that are available with LPDMs. While LPDMs are popular and useful for the analysis of atmospheric trace gas measurements, identifying the transport pathway of emissions from their source to a receptor (the Pico Mountain observatory in our case) using the standard gridded model output can be difficult or impossible, particularly during complex meteorological scenarios. The transport study in phase 1 was limited to only 1 month out of more than 3 years of available data and included only 4 case studies out of the 16 events specifically because of this confounding factor.
The second phase of this study addressed this difficulty by presenting a method to clearly and easily identify the pathway taken by only those emissions that arrive at a receptor at a particular time, by combining the standard gridded output from forward (i.e., concentrations) and backward (i.e., residence time) LPDM simulations, greatly simplifying similar analyses. The ability of the method to successfully determine the source-to-receptor pathway, restoring this Lagrangian information that is lost when the data are gridded, is proven by comparing the pathway determined from this method with the particle trajectories from both the forward and backward models. A sample analysis is also presented, demonstrating that this method is more accurate and easier to use than existing methods using standard LPDM products. Finally, we discuss potential future work that would be possible by combining the backward LPDM simulation with gridded data from other sources (e.g., chemical transport models) to obtain a Lagrangian sampling of the air that will eventually arrive at a receptor.
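
The combination step described above can be pictured as a grid-cell-wise product: the forward-plume concentration field and the backward (retroplume) residence-time field are both large only along the source-to-receptor pathway. The sketch below shows that operation on toy arrays; it is a schematic of the idea rather than the authors' exact weighting or normalization.

```python
import numpy as np

def source_to_receptor_overlap(forward_conc, backward_restime):
    """Highlight grid cells shared by the forward plume and the backward retroplume.

    forward_conc:     concentrations from the forward LPDM run (any gridded shape)
    backward_restime: residence times from the backward run (same shape)
    Returns the normalized element-wise product, large only along the pathway.
    """
    overlap = forward_conc * backward_restime
    total = overlap.sum()
    return overlap / total if total > 0 else overlap

# Toy 1-D example: forward plume travelling east, retroplume travelling west
grid = np.arange(50)
forward = np.exp(-0.5 * ((grid - 18) / 5.0) ** 2)    # forward plume centred near cell 18
backward = np.exp(-0.5 * ((grid - 22) / 5.0) ** 2)   # retroplume centred near cell 22
pathway = source_to_receptor_overlap(forward, backward)
print("pathway peaks at cell", int(pathway.argmax()))  # between the two centres
```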

Relevance: 30.00%

Abstract:

BACKGROUND: Propofol and sevoflurane display additivity for gamma-aminobutyric acid receptor activation, loss of consciousness, and tolerance of skin incision. Information about their interaction regarding electroencephalographic suppression is unavailable. This study examined this interaction, as well as the interaction on the probability of tolerance of shake and shout and three noxious stimulations, using a response surface methodology. METHODS: Sixty patients preoperatively received different combined concentrations of propofol (0–12 µg/ml) and sevoflurane (0–3.5 vol%) according to a crisscross design (274 concentration pairs, 3 to 6 per patient). After pseudo-steady state had been reached, the authors recorded bispectral index, state and response entropy, and the response to shake and shout, tetanic stimulation, laryngeal mask airway insertion, and laryngoscopy. For the analysis of the probability of tolerance by logistic regression, a Greco interaction model was used. For the separate analysis of bispectral index, state and response entropy suppression, a fractional Emax Greco model was used. All calculations were performed with NONMEM V (GloboMax LLC, Hanover, MD). RESULTS: Additivity was found for all endpoints. The Ce50,PROP/Ce50,SEVO pair was 3.68 µg·ml⁻¹/1.53 vol% for bispectral index suppression, 2.34 µg·ml⁻¹/1.03 vol% for tolerance of shake and shout, 5.34 µg·ml⁻¹/2.11 vol% for tetanic stimulation, 5.92 µg·ml⁻¹/2.55 vol% for laryngeal mask airway insertion, and 6.55 µg·ml⁻¹/2.83 vol% for laryngoscopy. CONCLUSION: For both electroencephalographic suppression and tolerance to stimulation, the interaction of propofol and sevoflurane was identified as additive. The response surface data can be used for more rational dose finding in the case of sequential administration and coadministration of propofol and sevoflurane.
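
A Greco-type response surface for the probability of tolerance can be written down compactly. The sketch below evaluates such a surface for a propofol/sevoflurane concentration pair using the Ce50 values reported above for tolerance of shake and shout; the slope parameter gamma is illustrative (not taken from the paper), and the interaction term alpha is set to zero to encode the additivity that was found.

```python
def tolerance_probability(c_prop, c_sevo, ce50_prop=2.34, ce50_sevo=1.03,
                          gamma=4.0, alpha=0.0):
    """Greco-type response surface for the probability of tolerance to a stimulus.

    ce50 values are those reported for tolerance of shake and shout
    (2.34 ug/ml propofol, 1.03 vol% sevoflurane); gamma is an illustrative
    steepness, and alpha = 0 encodes the additive interaction found here.
    """
    u = (c_prop / ce50_prop + c_sevo / ce50_sevo
         + alpha * (c_prop / ce50_prop) * (c_sevo / ce50_sevo))
    return u ** gamma / (1.0 + u ** gamma)

# Equipotent halves of each drug should give ~50% tolerance under additivity
print(round(tolerance_probability(1.17, 0.515), 2))   # -> 0.5
print(round(tolerance_probability(3.0, 1.5), 2))      # well above the 50% isobole
```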

Relevance: 30.00%

Abstract:

High-resolution and highly precise age models for recent lake sediments (last 100–150 years) are essential for quantitative paleoclimate research. These are particularly important for sedimentological and geochemical proxies, where transfer functions cannot be established and calibration must be based upon the relation of sedimentary records to instrumental data. High-precision dating for the calibration period is most critical, as it directly determines the quality of the calibration statistics. Here, as an example, we compare radionuclide age models obtained for two high-elevation glacial lakes in the Central Chilean Andes (Laguna Negra: 33°38′S/70°08′W, 2,680 m a.s.l. and Laguna El Ocho: 34°02′S/70°19′W, 3,250 m a.s.l.). We show the different numerical models that produce accurate age-depth chronologies based on 210Pb profiles, and we explain how to obtain reduced age-error bars at the bottom part of the profiles, i.e., typically around the end of the 19th century. In order to constrain the age models, we propose a method with the following steps: (i) sampling at irregularly spaced intervals for 226Ra, 210Pb and 137Cs depending on the stratigraphy and microfacies, (ii) a systematic comparison of numerical models for the calculation of 210Pb-based age models: constant flux constant sedimentation (CFCS), constant initial concentration (CIC), constant rate of supply (CRS) and sediment isotope tomography (SIT), (iii) numerical constraining of the CRS and SIT models with the 137Cs chronomarker of AD 1964, and (iv) step-wise cross-validation with independent diagnostic environmental stratigraphic markers of known age (e.g., volcanic ash layers, historical floods and earthquakes). In both examples, we also use airborne pollutants such as spheroidal carbonaceous particles (reflecting the history of fossil fuel emissions) and excess atmospheric Cu deposition (reflecting the production history of a large local Cu mine), as well as turbidites related to historical earthquakes. Our results show that the SIT model constrained with the 137Cs AD 1964 peak performs best over the entire chronological profile (last 100–150 years) and yields the smallest standard deviations for the sediment ages. Such precision is critical for the calibration statistics and, ultimately, for the quality of the quantitative paleoclimate reconstruction. The systematic comparison of CRS and SIT models also helps to validate the robustness of the chronologies in different sections of the profile. Although surprisingly poorly known and under-explored in paleolimnological research, the SIT model has great potential for paleoclimatological reconstructions based on lake sediments.
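
Of the numerical models compared, the CRS (constant rate of supply) model has a compact closed form: the age at depth z is t(z) = (1/λ) ln[A(0)/A(z)], where A(z) is the cumulative unsupported 210Pb inventory below z and λ is the 210Pb decay constant. The sketch below applies this to a synthetic profile; the activities, slice masses and the clipping at the base are illustrative choices, not values from the two lakes.

```python
import numpy as np

LAMBDA_PB210 = np.log(2) / 22.3          # 210Pb decay constant (1/yr), half-life 22.3 yr

def crs_ages(unsupported_pb210, dry_mass_per_area):
    """CRS (constant rate of supply) ages for a 210Pb profile.

    unsupported_pb210:  excess 210Pb activity per dry mass in each slice (Bq/kg), top down
    dry_mass_per_area:  dry mass per unit area of each slice (kg/m2)
    Returns ages (yr before coring) at the bottom of each slice.
    """
    inventory = unsupported_pb210 * dry_mass_per_area          # Bq/m2 per slice
    total = inventory.sum()                                    # A(0)
    below = total - np.cumsum(inventory)                       # A(z) below each slice
    below = np.clip(below, 1e-12, None)                        # the deepest slice tends to
    return np.log(total / below) / LAMBDA_PB210                # infinite age in CRS; clip
                                                               # only avoids log(0)

# Synthetic core: exponentially decaying excess 210Pb over ten slices
activity = 300 * np.exp(-0.25 * np.arange(10))                 # Bq/kg, illustrative
mass = np.full(10, 5.0)                                        # kg/m2 per slice
print(np.round(crs_ages(activity, mass), 1))
```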

Relevance: 30.00%

Abstract:

Monte Carlo simulation was used to evaluate properties of a simple Bayesian MCMC analysis of the random effects model for single-group Cormack-Jolly-Seber capture-recapture data. The MCMC method is applied to the model via a logit link, so the parameters p and S are on a logit scale, where logit(S) is assumed to follow, and is generated from, a normal distribution with mean μ and variance σ². The marginal prior distributions on logit(p) and μ were independent normal with mean zero and standard deviation 1.75 for logit(p) and 100 for μ, and hence minimally informative. The marginal prior on σ² was placed on τ² = 1/σ² as a gamma distribution with α = β = 0.001. The study design has 432 points spread over 5 factors: occasions (t), new releases per occasion (u), p, μ, and σ. At each design point 100 independent trials were completed (hence 43,200 trials in total), each with sample size n = 10,000 from the parameter posterior distribution. At 128 of these design points, comparisons are made to previously reported results from a method-of-moments procedure. We looked at properties of point and interval inference on μ and σ based on the posterior mean, median, and mode, and the equal-tailed 95% credibility interval. Bayesian inference did very well for the parameter μ, but under the conditions used here, MCMC inference performance for σ was mixed: poor for sparse data (i.e., only 7 occasions) or σ = 0, but good when there were sufficient data and σ was not small.
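
Restated in standard notation, the hierarchical model being fitted is the following (the indexing of the survival random effects over capture occasions is implied rather than stated in the abstract):

```latex
\begin{aligned}
\operatorname{logit}(S_i) &\sim \mathcal{N}(\mu,\ \sigma^{2}), \\
\operatorname{logit}(p) &\sim \mathcal{N}(0,\ 1.75^{2}), \qquad
\mu \sim \mathcal{N}(0,\ 100^{2}), \\
\tau^{2} &= 1/\sigma^{2} \sim \operatorname{Gamma}(\alpha = 0.001,\ \beta = 0.001).
\end{aligned}
```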

Relevance: 30.00%

Abstract:

This tutorial gives a step-by-step explanation of how one uses experimental data to construct a biologically realistic multicompartmental model. Special emphasis is given to the many ways in which this process can be imprecise. The tutorial is intended both for experimentalists who want to get into computer modeling and for computer scientists who use abstract neural network models but are curious about biologically realistic modeling. The tutorial does not depend on the use of a specific simulation engine, but rather covers the kinds of data needed for constructing a model, how they are used, and potential pitfalls in the process.
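
As a flavor of what such a model looks like in code, independent of any particular simulation engine, here is a minimal two-compartment passive model (a soma coupled to one dendritic compartment) driven by a current step. All parameter values are illustrative placeholders, not data from any experiment.

```python
import numpy as np

# Illustrative passive parameters (not from any specific experiment)
C_m = 100e-12        # membrane capacitance per compartment (F)
g_leak = 5e-9        # leak conductance per compartment (S)
E_leak = -65e-3      # leak reversal potential (V)
g_axial = 20e-9      # coupling conductance between soma and dendrite (S)
I_inj = 20e-12       # current step injected into the soma (A)

dt, t_stop = 0.1e-3, 0.2
n = int(t_stop / dt)
v_soma = np.full(n, E_leak)
v_dend = np.full(n, E_leak)

for k in range(1, n):
    i_stim = I_inj if 0.05 <= k * dt < 0.15 else 0.0
    # Forward-Euler update of the two coupled passive compartments
    dv_s = (-g_leak * (v_soma[k-1] - E_leak)
            + g_axial * (v_dend[k-1] - v_soma[k-1]) + i_stim) / C_m
    dv_d = (-g_leak * (v_dend[k-1] - E_leak)
            + g_axial * (v_soma[k-1] - v_dend[k-1])) / C_m
    v_soma[k] = v_soma[k-1] + dt * dv_s
    v_dend[k] = v_dend[k-1] + dt * dv_d

print(f"steady-state soma depolarization: {(v_soma[1400] - E_leak) * 1e3:.1f} mV")
```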

Relevance: 30.00%

Abstract:

Mixed Reality (MR) aims to link virtual entities with the real world and has many applications in domains such as the military and medicine [JBL+00, NFB07]. In many MR systems, and more precisely in augmented scenes, the application needs to render the virtual part accurately at the right time. To achieve this, such systems acquire data related to the real world from a set of sensors before rendering virtual entities. A suitable system architecture should minimize the delays to keep the overall system delay (also called end-to-end latency) within the requirements for real-time performance. In this context, we propose a compositional modeling framework for MR software architectures in order to formally specify, simulate and validate the time constraints of such systems. Our approach is first based on a functional decomposition of such systems into generic components. The obtained elements, as well as their typical interactions, give rise to generic representations in terms of timed automata. A whole system is then obtained as a composition of such defined components. To write specifications, a textual language named MIRELA (MIxed REality LAnguage) is proposed along with the corresponding compilation tools. The generated output contains timed automata in UPPAAL format for simulation and verification of time constraints. These automata may also be used to generate source code skeletons for an implementation on an MR platform. The approach is illustrated first on a small example. A realistic case study is also developed; it is modeled by several timed automata synchronizing through channels and including a large number of time constraints. Both systems have been simulated in UPPAAL and checked against the required behavioral properties.
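
The MIRELA language and its UPPAAL output are not reproduced here; as a deliberately simplified companion, the sketch below checks an end-to-end latency budget for a purely sequential sensor-to-rendering pipeline from per-component worst-case delays. The component names, delays and budget are invented for illustration; the actual framework verifies much richer timed behavior (synchronization, periodicity, deadlocks) than this simple sum.

```python
from dataclasses import dataclass

@dataclass
class Component:
    name: str
    worst_case_ms: float    # worst-case processing delay of this component

def end_to_end_latency(pipeline):
    """Worst-case end-to-end latency of a purely sequential component chain."""
    return sum(c.worst_case_ms for c in pipeline)

# Hypothetical augmented-scene pipeline
pipeline = [
    Component("camera acquisition", 16.7),
    Component("pose tracking", 8.0),
    Component("scene registration", 5.0),
    Component("rendering", 11.0),
]
budget_ms = 50.0                       # assumed real-time requirement
latency = end_to_end_latency(pipeline)
print(f"worst-case latency {latency:.1f} ms "
      f"({'within' if latency <= budget_ms else 'violates'} the {budget_ms:.0f} ms budget)")
```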

Relevance: 30.00%

Abstract:

Forests near the Mediterranean coast have been shaped by millennia of human disturbance. Consequently, ecological studies relying on modern observations or historical records may have difficulty assessing natural vegetation dynamics under current and future climate. We combined a sedimentary pollen record from Lago di Massaciuccoli, Tuscany, Italy, with simulations from the LandClim dynamic vegetation model to determine what vegetation preceded intense human disturbance, how past changes in vegetation relate to fire and browsing, and the potential of an extinct vegetation type under present climate. We simulated vegetation dynamics near Lago di Massaciuccoli for the last 7,000 years using a local chironomid-inferred temperature reconstruction with combinations of three fire regimes (small infrequent, large infrequent, small frequent) and three browsing intensities (no browsing, light browsing, and moderate browsing), and compared model output to pollen data. Simulations with low disturbance support pollen-inferred evidence for a mixed forest dominated by Quercus ilex (a Mediterranean species) and Abies alba (a montane species). Whereas pollen data record the collapse of A. alba after 6,000 cal yr BP, simulated populations expanded with declining summer temperatures during the late Holocene. Simulations with increased fire and browsing are consistent with evidence for expansion by deciduous species after A. alba collapsed. According to our combined paleo-environmental and modeling evidence, mixed Q. ilex and A. alba forests remain possible under current climate with limited disturbance, and provide a viable management objective for ecosystems near the Mediterranean coast and in regions that are expected to experience a Mediterranean-type climate in the future.

Relevance: 30.00%

Abstract:

In terms of atmospheric impact, the volcanic eruption of Mt. Pinatubo (1991) is the best-characterized large eruption on record. We investigate here the model-derived stratospheric warming following the Pinatubo eruption as derived from SAGE II extinction data, including recent improvements in the processing algorithm. This method, termed SAGE_4λ, makes use of the four wavelengths (385, 452, 525 and 1024 nm) of the SAGE II data when available, and uses a data-filling procedure in the opacity-induced "gap" regions. Using SAGE_4λ, we derived aerosol size distributions that properly reproduce extinction coefficients also at much longer wavelengths. This provides a good basis for calculating the absorption of terrestrial infrared radiation and the resulting stratospheric heating. However, we also show that the use of this data set in a global chemistry–climate model (CCM) still leads to stronger aerosol-induced stratospheric heating than observed, with temperatures partly even higher than the already-too-high values found by many models in recent general circulation model (GCM) and CCM intercomparisons. This suggests that the overestimation of the stratospheric warming after the Pinatubo eruption may not be ascribed to an insufficient observational database but instead to the use of outdated data sets, to deficiencies in the implementation of the forcing data, or to radiative or dynamical model artifacts. Conversely, the SAGE_4λ approach reduces the infrared absorption in the tropical tropopause region, resulting in significantly better agreement with the post-volcanic temperature record at these altitudes.
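
The link between the retrieved size distributions and the stratospheric heating is the spectral extinction integral: once a number size distribution n(r) is known, the extinction coefficient can be evaluated at wavelengths well beyond the four SAGE II channels, including the thermal infrared that drives the heating. In standard Mie notation (generic, not specific to SAGE_4λ):

```latex
\beta_{\mathrm{ext}}(\lambda) \;=\; \int_{0}^{\infty} \pi r^{2}\,
Q_{\mathrm{ext}}\!\left(\tfrac{2\pi r}{\lambda},\, m\right)\, n(r)\,\mathrm{d}r ,
```

where Q_ext is the Mie extinction efficiency and m the (wavelength-dependent) refractive index of the stratospheric aerosol.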

Relevance: 30.00%

Abstract:

Consecrated in 1297 as the church of St. Catherine's monastery, which had been founded four years earlier, the Gothic Church of St. Catherine was largely destroyed in a devastating bombing raid on January 2, 1945. To counteract the ongoing process of disintegration, the geo-information department and the lower monument protection authority of the City of Nuremberg decided to have a three-dimensional building model of the Church of St. Catherine produced. A heterogeneous set of data was used for the preparation of a parametric architectural model. In effect, the modeling of historic buildings can profit from the so-called BIM method (Building Information Modeling), as the necessary structuring of the basic data turns it into very sustainable information. The resulting model is perfectly suited to give present-day observers a vivid impression of the interior and exterior of this former mendicant order's church.

Relevance: 30.00%

Abstract:

An ever-increasing number of low Earth orbiting (LEO) satellites is, or will be, equipped with retro-reflectors for Satellite Laser Ranging (SLR) and on-board receivers to collect observations from Global Navigation Satellite Systems (GNSS) such as the Global Positioning System (GPS) and, in the future, the Russian GLONASS and the European Galileo systems. At the Astronomical Institute of the University of Bern (AIUB), LEO precise orbit determination (POD) using either GPS or SLR data is performed for a wide range of applications for satellites at different altitudes. For this purpose, the classical numerical integration techniques, as also used for dynamic orbit determination of satellites at high altitudes, are extended by pseudo-stochastic orbit modeling techniques to efficiently cope with potential force model deficiencies for satellites at low altitudes. Accuracies of better than 2 cm may be achieved by pseudo-stochastic orbit modeling for satellites at very low altitudes, such as for the GPS-based POD of the Gravity field and steady-state Ocean Circulation Explorer (GOCE).
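
Pseudo-stochastic orbit modeling, as used at AIUB, augments the deterministic equation of motion with small empirical parameters estimated at predefined epochs; in the simplest ("pulse") variant these are instantaneous velocity changes kept small by a priori constraints. Schematically (the notation is generic and not taken from the abstract):

```latex
\ddot{\mathbf r}(t) \;=\; \mathbf f\bigl(t,\ \mathbf r,\ \dot{\mathbf r},\ q_1,\dots,q_d\bigr)
\;+\; \sum_{i=1}^{n} \delta(t-t_i)\,\mathbf v_i ,
```

where f is the deterministic force model with dynamic parameters q_1, ..., q_d, and the pulses v_i at epochs t_i absorb force-model deficiencies while their a priori weights keep the solution close to a purely dynamic orbit.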