9 results for System analysis - Data processing
in BORIS: Bern Open Repository and Information System - Bern - Switzerland
Abstract:
A methodological evaluation of the proteomic analysis of cardiovascular-tissue material has been performed, with special emphasis on establishing examinations that allow reliable quantitative analysis of silver-stained readouts. Reliability, reproducibility, robustness and linearity were addressed and clarified. In addition, several types of normalization procedures were evaluated and new approaches are proposed. It has been found that the silver-stained readout offers a convenient approach for quantitation if a linear range for gel loading is defined. A broad, 10-fold input range (loading 20-200 µg per gel) fulfills the linearity criteria, although at the lowest input (20 µg) a portion of protein species remains undetected. The method is reliable and reproducible within a range of 65-200 µg input. The normalization procedure using the sum of all spot intensities from a silver-stained 2D pattern has been shown to be less reliable than other approaches, namely normalization through the median or through the interquartile range. A refinement of normalization through virtual segmentation of the pattern, with a normalization factor calculated for each stratum, provides highly satisfactory results. The presented results not only provide evidence for the usefulness of silver-stained gels for quantitative evaluation, but are also directly applicable to the research endeavor of monitoring alterations in cardiovascular pathophysiology.
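The normalization factors compared above can be sketched in a few lines. This is an illustrative toy example, not the study's implementation; the function names and spot intensities are made up.

```python
# Hypothetical sketch of per-gel normalization factors for 2D spot patterns,
# comparing the total-sum, median, and interquartile-range approaches.
# All names and intensity values are illustrative, not from the study.

def quantile(sorted_vals, q):
    """Linear-interpolation quantile of a pre-sorted list."""
    idx = q * (len(sorted_vals) - 1)
    lo = int(idx)
    hi = min(lo + 1, len(sorted_vals) - 1)
    frac = idx - lo
    return sorted_vals[lo] * (1 - frac) + sorted_vals[hi] * frac

def normalization_factors(spots):
    """Return candidate per-gel normalization factors from spot intensities."""
    s = sorted(spots)
    total = sum(s)                                # sum of all spot intensities
    median = quantile(s, 0.5)                     # robust to a few saturated spots
    iqr = quantile(s, 0.75) - quantile(s, 0.25)   # spread-based factor
    return {"sum": total, "median": median, "iqr": iqr}

gel = [120.0, 80.0, 95.0, 300.0, 60.0, 110.0]     # toy spot intensities
factors = normalization_factors(gel)
normalized = [v / factors["median"] for v in gel]  # median-normalized pattern
```

The median and IQR factors are less sensitive than the sum to a handful of very intense (e.g. saturated) spots, which is the motivation the abstract points to.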
Abstract:
Successful software systems cope with complexity by organizing classes into packages. However, a particular organization may be neither straightforward nor obvious for a given developer. As a consequence, classes can be misplaced, leading to duplicated code and ripple effects with minor changes affecting multiple packages. We claim that contextual information is the key to re-architecting a system. Exploiting contextual information, we propose a technique to detect misplaced classes by analyzing how client packages access the classes of a given provider package. We define locality as a measure of the degree to which classes reused by common clients appear in the same package. We then use locality to guide a simulated annealing algorithm to obtain optimal placements of classes in packages. The result is the identification of classes that are candidates for relocation. We apply the technique to three applications and validate the usefulness of our approach via developer interviews.
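The idea of a locality score driving simulated annealing can be sketched as follows. This is a minimal illustration under simplifying assumptions (locality reduced to counting co-located class pairs that share a client; cooling schedule and parameters invented), not the authors' implementation.

```python
import math
import random

# Illustrative sketch: score a class-to-package placement by "locality"
# (how often classes reused by a common client share a package) and
# improve it with simulated annealing. All parameters are assumptions.

def locality(placement, client_pairs):
    """Count pairs of classes with a common client that share a package."""
    return sum(1 for a, b in client_pairs if placement[a] == placement[b])

def anneal(classes, packages, client_pairs, steps=5000, t0=2.0, seed=0):
    rng = random.Random(seed)
    placement = {c: rng.choice(packages) for c in classes}
    score = locality(placement, client_pairs)
    for step in range(steps):
        t = t0 * (1 - step / steps) + 1e-9        # linear cooling schedule
        c = rng.choice(classes)
        old = placement[c]
        placement[c] = rng.choice(packages)       # propose moving one class
        new_score = locality(placement, client_pairs)
        if new_score < score and rng.random() >= math.exp((new_score - score) / t):
            placement[c] = old                    # reject a worsening move
        else:
            score = new_score                     # accept (Metropolis rule)
    return placement, score

# Toy system: A/B share clients, C/D share clients; two candidate packages.
classes = ["A", "B", "C", "D"]
pairs = [("A", "B"), ("C", "D")]
placement, score = anneal(classes, ["p1", "p2"], pairs)
```

Classes whose moves consistently raise the locality score in such runs would be the relocation candidates the abstract describes.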
Abstract:
Many observed time series of the global radiosonde or PILOT networks exist as fragments distributed over different archives. Identifying and merging these fragments can enhance their value for studies of the three-dimensional spatial structure of climate change. The Comprehensive Historical Upper-Air Network (CHUAN version 1.7), which was substantially extended in 2013, and the Integrated Global Radiosonde Archive (IGRA) are the most important collections of upper-air measurements taken before 1958. CHUAN (tracked) balloon data start in 1900, with substantially more data from the late 1920s onward, whereas IGRA data start in 1937. However, a substantial fraction of those measurements were not taken at synoptic times (preferably 00:00 or 12:00 GMT) and were reported on altitude levels instead of standard pressure levels. To make them comparable with more recent data, the records have been brought to synoptic times and standard pressure levels using state-of-the-art interpolation techniques, employing geopotential information from the National Oceanic and Atmospheric Administration (NOAA) 20th Century Reanalysis (NOAA 20CR). From 1958 onward, the European Re-Analysis archives (ERA-40 and ERA-Interim) available at the European Centre for Medium-Range Weather Forecasts (ECMWF) are the main data sources. These are easier to use, but PILOT data still have to be interpolated to standard pressure levels. Fractions of the same records distributed over different archives have been merged where necessary, taking care that the data remain traceable to their original sources. Where possible, station IDs assigned by the World Meteorological Organization (WMO) have been allocated to the station records. Records that could never be identified by a WMO ID have been assigned a local ID above 100 000. The merged data set contains 37 wind records longer than 70 years and 139 temperature records longer than 60 years.
It can be seen as a useful basis for further data processing steps, most notably homogenization and gridding, after which it should be a valuable resource for climatological studies. Homogeneity adjustments for wind using the NOAA 20CR as a reference are described in Ramella Pralungo and Haimberger (2014). Reliable homogeneity adjustments for temperature beyond 1958 using a surface-data-only reanalysis such as the NOAA 20CR as a reference have yet to be created. All the archives and metadata files are available in ASCII and netCDF format in the PANGAEA archive.
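The step of bringing values reported on arbitrary levels onto standard pressure levels can be sketched as follows. This is a minimal illustration assuming simple linear interpolation in log-pressure; the actual processing employs geopotential information from NOAA 20CR and more elaborate techniques, and the profile values here are invented.

```python
import math

# Hedged sketch: interpolate a sounding reported on arbitrary pressure
# levels onto standard pressure levels, linearly in ln(p). The profile
# below is illustrative only.

def interp_to_standard_levels(levels_hpa, values, standard_hpa):
    """Interpolate values in ln(p) onto standard pressure levels.

    levels_hpa must be sorted in descending pressure (ascending altitude).
    Standard levels outside the sampled range yield None (no extrapolation).
    """
    out = []
    ln_p = [math.log(p) for p in levels_hpa]
    for p_std in standard_hpa:
        x = math.log(p_std)
        if x > ln_p[0] or x < ln_p[-1]:
            out.append(None)                      # refuse to extrapolate
            continue
        # find the bracketing pair (ln_p decreases along the profile)
        for i in range(len(ln_p) - 1):
            if ln_p[i] >= x >= ln_p[i + 1]:
                w = (ln_p[i] - x) / (ln_p[i] - ln_p[i + 1])
                out.append(values[i] * (1 - w) + values[i + 1] * w)
                break
    return out

# Temperatures (K) on reported levels, interpolated to 850 and 500 hPa:
profile_p = [1000.0, 900.0, 700.0, 400.0]
profile_t = [288.0, 282.0, 270.0, 250.0]
t_std = interp_to_standard_levels(profile_p, profile_t, [850.0, 500.0])
```

Interpolating in log-pressure rather than pressure is the usual choice because pressure decreases roughly exponentially with height.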
Abstract:
Lake water temperature (LWT) is an important driver of lake ecosystems and has been identified as an indicator of climate change. Consequently, the Global Climate Observing System (GCOS) lists LWT as an essential climate variable. Although long in situ LWT time series exist for some European lakes, many lakes are observed only irregularly or not at all, making these observations insufficient for climate monitoring. Satellite data can provide the information needed. However, only a few satellite sensors offer the possibility of analysing time series covering 25 years or more. The Advanced Very High Resolution Radiometer (AVHRR) is among these: it has been flown as a heritage instrument for almost 35 years and will continue to fly for at least ten more, offering a unique opportunity for satellite-based climate studies. Herein we present a satellite-based lake surface water temperature (LSWT) data set for European water bodies in or near the Alps, based on the extensive AVHRR 1 km data record (1989–2013) of the Remote Sensing Research Group at the University of Bern. It has been compiled from AVHRR/2 (NOAA-07, -09, -11, -14) and AVHRR/3 (NOAA-16, -17, -18, -19 and MetOp-A) data. The high accuracy needed for climate-related studies requires careful pre-processing and consideration of the atmospheric state. The LSWT retrieval is based on a simulation-based scheme making use of the Radiative Transfer for TOVS (RTTOV) Version 10 together with ERA-Interim reanalysis data from the European Centre for Medium-Range Weather Forecasts. The resulting LSWTs were extensively compared with in situ measurements from lakes of various sizes between 14 and 580 km², and the resulting biases and RMSEs were found to be within the ranges of −0.5 to 0.6 K and 1.0 to 1.6 K, respectively.
The upper limits of the reported errors could rather be attributed to uncertainties in the comparison between in situ and satellite observations than to inaccuracies of the satellite retrieval. An inter-comparison with the standard Moderate-resolution Imaging Spectroradiometer (MODIS) Land Surface Temperature product exhibits RMSEs and biases in the ranges of 0.6 to 0.9 K and −0.5 to 0.2 K, respectively. The cross-platform consistency of the retrieval was found to be within ~0.3 K. For one lake, the satellite-derived trend was compared with the trend of in situ measurements, and both were found to be similar; thus, orbital drift does not cause artificial temperature trends in the data set. A comparison with LSWT derived through global sea surface temperature (SST) algorithms shows lower RMSEs and biases for the simulation-based approach. An ongoing project will apply the developed method to retrieve LSWT for all of Europe to derive the climate signal of the last 30 years. The data are available at doi:10.1594/PANGAEA.831007.
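The match-up statistics quoted above (bias and RMSE between satellite-retrieved and in situ temperatures) are simple to compute; the sketch below shows the standard definitions on invented temperature pairs.

```python
import math

# Minimal sketch of satellite-vs-in-situ match-up statistics.
# The temperature pairs below are made up for illustration.

def bias_and_rmse(satellite, in_situ):
    """Mean difference (bias) and root-mean-square error of matched pairs."""
    diffs = [s - i for s, i in zip(satellite, in_situ)]
    bias = sum(diffs) / len(diffs)
    rmse = math.sqrt(sum(d * d for d in diffs) / len(diffs))
    return bias, rmse

sat = [285.2, 286.1, 284.0, 287.4]    # K, satellite LSWT retrievals
situ = [285.0, 286.5, 284.3, 287.0]   # K, matched in situ measurements
b, r = bias_and_rmse(sat, situ)
```

Bias captures a systematic offset of the retrieval, while RMSE also absorbs scatter from imperfect collocation in space and time, which is why the abstract attributes the upper error limits partly to the comparison itself.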
Abstract:
Navigation of deep space probes is most commonly performed using the spacecraft Doppler tracking technique. Orbital parameters are determined from a series of repeated measurements of the frequency shift of a microwave carrier over a given integration time. Currently, both ESA and NASA operate antennas at several sites around the world to ensure the tracking of deep space probes. Only a small number of software packages are used today to process Doppler observations. The Astronomical Institute of the University of Bern (AIUB) has recently started developing Doppler data processing capabilities within the Bernese GNSS Software. This software has been used extensively for Precise Orbit Determination of Earth-orbiting satellites using GPS data collected by on-board receivers, and for subsequent determination of the Earth's gravity field. In this paper, we present the current status of the Doppler data modeling and orbit determination capabilities of the Bernese GNSS Software using GRAIL data. In particular, we focus on the implemented orbit determination procedure used for the combined analysis of Doppler and inter-satellite Ka-band data. We show that even at this early stage of development we can achieve an accuracy of a few mHz on two-way S-band Doppler observations and of 2 µm/s on Ka-band range-rate (KBRR) data from the GRAIL primary mission phase.
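The relation between the quoted Doppler accuracy and range-rate precision can be sketched with the first-order two-way Doppler formula. The carrier frequency below is an assumed nominal S-band value, not taken from the paper.

```python
# Back-of-the-envelope sketch: a two-way link doubles the Doppler shift,
#   delta_f = -2 * v_r * f0 / c   (first order in v_r / c),
# so a few-mHz frequency accuracy maps to sub-mm/s range-rate precision.
# F0_S_BAND is an assumed carrier frequency, not from the paper.

C = 299_792_458.0          # speed of light, m/s
F0_S_BAND = 2.3e9          # assumed S-band carrier frequency, Hz

def range_rate_from_two_way_shift(delta_f_hz, f0_hz=F0_S_BAND):
    """Radial velocity implied by a two-way Doppler frequency shift."""
    return -delta_f_hz * C / (2.0 * f0_hz)

def two_way_shift_from_range_rate(v_r_mps, f0_hz=F0_S_BAND):
    """Two-way Doppler shift produced by a given radial velocity."""
    return -2.0 * v_r_mps * f0_hz / C

# A 3 mHz measurement accuracy corresponds to roughly 0.2 mm/s in range rate:
v_precision = abs(range_rate_from_two_way_shift(3e-3))
```

This first-order relation ignores relativistic and media corrections, which a full observation model such as the one being implemented in the Bernese GNSS Software must include.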
Abstract:
Space debris in geostationary orbits may be detected with optical telescopes when the objects are illuminated by the Sun. The advantage over radar lies in the illumination: radar must illuminate the objects itself, so its detection sensitivity decreases with the fourth power of the distance. The German Space Operations Center (GSOC), together with the Astronomical Institute of the University of Bern (AIUB), is setting up a telescope system called SMARTnet to demonstrate the capability of performing geostationary surveillance. The telescope system will consist of two telescopes on one mount: a smaller telescope with an aperture of 20 cm will serve for fast surveys, while the larger one, with an aperture of 50 cm, will be used for follow-up observations. The telescopes will be operated by GSOC from Oberpfaffenhofen through the internal monitoring and control system SMARTnetMAC. The observation plan will be generated by SMARTnetPlanning seven days in advance using an optimized planning scheduler, taking into account downtime such as cloudy nights, the priority of objects, etc. In each picture taken, stars are identified, and everything that is not a star is treated as a possible object. If the same object can be identified in multiple pictures within a short time span, the resulting trace is called a tracklet. In the next step, several tracklets are correlated to identify individual objects; ephemeris data for these objects are then generated and catalogued. This will allow for services like collision avoidance to ensure safe operations for GSOC's satellites. The complete data processing chain is handled by BACARDI, the backbone catalogue of relational debris information, and is presented as a poster.
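The tracklet-building step described above can be sketched as follows. This is an illustrative simplification, not the SMARTnet/BACARDI implementation: the time threshold and the (time, RA, Dec) detection tuples are assumptions.

```python
# Illustrative sketch: group non-star detections into tracklets whenever
# the same object recurs within a short time window. The 120 s gap and
# the detection tuples are assumed values, not SMARTnet parameters.

def build_tracklets(detections, max_gap_s=120.0):
    """Group (t_seconds, ra_deg, dec_deg) detections into tracklets:
    runs of detections separated by at most max_gap_s in time."""
    tracklets = []
    current = []
    for det in sorted(detections, key=lambda d: d[0]):
        if current and det[0] - current[-1][0] > max_gap_s:
            if len(current) >= 2:           # a single hit is not a tracklet
                tracklets.append(current)
            current = []
        current.append(det)
    if len(current) >= 2:
        tracklets.append(current)
    return tracklets

obs = [(0.0, 10.00, 5.00), (60.0, 10.02, 5.01), (120.0, 10.04, 5.02),
       (4000.0, 11.00, 6.00),               # isolated hit, discarded
       (8000.0, 12.00, 7.00), (8060.0, 12.02, 7.01)]
tracklets = build_tracklets(obs)
```

A real pipeline would additionally check that the positions within a candidate tracklet follow a consistent motion before correlating tracklets into catalogued objects.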