915 results for reference model
Abstract:
The research has been partially supported by INFRAWEBS - IST FP6-2003/IST/2.3.2.3 Research Project No. 511723 and by "Technologies of the Information Society for Knowledge Processing and Management" - IIT-BAS Research Project No. 010061.
Abstract:
This study extends previous research concerning intervertebral motion registration by means of 2D dynamic fluoroscopy to obtain a more comprehensive 3D description of vertebral kinematics. The problem of estimating the 3D rigid pose of a CT volume of a vertebra from its 2D X-ray fluoroscopy projection is addressed. 2D-3D registration is obtained by maximising a measure of similarity between Digitally Reconstructed Radiographs (obtained from the CT volume) and the real fluoroscopic projection. X-ray energy correction was performed. To assess the method, a calibration model was realised: a dry sheep vertebra was rigidly fixed to a frame of reference including metallic markers. An accurate measurement of 3D orientation was obtained via single-camera calibration of the markers and taken as the true 3D vertebra position; the vertebra's 3D pose was then estimated and the results compared. Error analysis revealed an accuracy of the order of 0.1 degree for the rotation angles, of about 1 mm for displacements parallel to the fluoroscopic plane, and of the order of 10 mm for the orthogonal displacement. © 2010 P. Bifulco et al.
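The registration loop described here (iteratively adjusting a candidate pose to maximise a similarity measure between a DRR and the fluoroscopic frame) can be sketched as follows. This is not the authors' implementation: the DRR renderer is a toy placeholder, and normalized cross-correlation is assumed as the similarity measure since the abstract does not name one.

```python
# Hedged sketch of intensity-based 2D-3D registration over a 6-DOF pose
# (3 rotations in degrees, 3 translations in mm).
import numpy as np
from scipy.ndimage import shift as nd_shift
from scipy.optimize import minimize

def ncc(a, b):
    """Normalized cross-correlation between two images."""
    a = (a - a.mean()) / (a.std() + 1e-12)
    b = (b - b.mean()) / (b.std() + 1e-12)
    return float((a * b).mean())

def render_drr(ct_volume, pose):
    """Placeholder DRR: a real implementation would ray-cast the CT volume with
    the fluoroscope geometry (including X-ray energy correction) at the given
    pose. Here only the in-plane translations are mimicked so the sketch runs."""
    _rot_deg, trans_mm = pose[:3], pose[3:]
    projection = ct_volume.sum(axis=0)
    return nd_shift(projection, shift=(trans_mm[0], trans_mm[1]), order=1)

def register(ct_volume, fluoro_image, initial_pose):
    """Estimate the rigid pose [rx, ry, rz, tx, ty, tz] maximising the
    similarity between the rendered DRR and the fluoroscopic projection."""
    cost = lambda pose: -ncc(render_drr(ct_volume, pose), fluoro_image)
    return minimize(cost, initial_pose, method="Nelder-Mead").x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ct = rng.random((32, 64, 64))
    true_pose = np.array([0.0, 0.0, 0.0, 3.0, -2.0, 0.0])
    fluoro = render_drr(ct, true_pose)   # stand-in for a real fluoroscopy frame
    print("estimated pose:", np.round(register(ct, fluoro, np.zeros(6)), 1))
```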
Abstract:
In this paper a full analytic model for pause intensity (PI), a no-reference metric for video quality assessment, is presented. The model is built upon the video playout buffer behavior at the client side and also encompasses the characteristics of a TCP network. Video streaming via TCP produces impairments in play continuity, which are not typically reflected in current objective metrics such as PSNR and SSIM. Recently the buffer underrun frequency/probability has been used to characterize the buffer behavior and as a measurement for performance optimization. But we show, using subjective testing, that underrun frequency cannot reflect the viewers' quality of experience for TCP-based streaming. We also demonstrate that PI is a comprehensive metric made up of a combination of phenomena observed in the playout buffer. The analytical model in this work is verified with simulations carried out on ns-2, showing that the two results are closely matched. The effectiveness of the PI metric has also been proved by subjective testing on a range of video clips, where PI values exhibit a good correlation with the viewers' opinion scores. © 2012 IEEE.
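As an illustration of the kind of playout-buffer behaviour PI is built on, the sketch below extracts stall time from a simulated client-side buffer trace and reports the fraction of the session spent paused. It is not the analytic PI model of the paper; all rates and thresholds are invented.

```python
# Hedged sketch: an empirical pause-intensity-style measurement from a
# simulated playout-buffer trace.
def simulate_playout(arrival_bits, play_rate_bits, startup_bits):
    """Return (pause_time, play_time) in seconds for a per-second trace of
    arriving bits. Playback starts once startup_bits are buffered; afterwards
    it stalls whenever the buffer cannot supply one second of video."""
    buffered, started = 0.0, False
    pause_time, play_time = 0.0, 0.0
    for bits in arrival_bits:
        buffered += bits
        if not started and buffered >= startup_bits:
            started = True
        if not started:
            continue                      # initial buffering, not counted as a stall
        if buffered >= play_rate_bits:
            buffered -= play_rate_bits
            play_time += 1.0
        else:
            pause_time += 1.0
    return pause_time, play_time

def pause_intensity(pause_time, play_time):
    """Fraction of the viewing session spent stalled (0 = no pauses)."""
    total = pause_time + play_time
    return pause_time / total if total else 0.0

if __name__ == "__main__":
    # Throughput trace (bits per second) with a congestion dip mid-stream.
    trace = [600_000] * 30 + [150_000] * 20 + [600_000] * 30
    pt, vt = simulate_playout(trace, play_rate_bits=500_000, startup_bits=2_000_000)
    print(f"pause intensity ~ {pause_intensity(pt, vt):.2f}")
```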
Abstract:
Setting out from the database of Operophtera brumata L. between 1973 and 2000, collected by the Light Trap Network in Hungary, we introduce a simple theta-logistic population dynamical model based on endogenous and exogenous factors only. We create an indicator set from which we can choose the elements that improve the fitting results most effectively. Then we extend the basic model with additive climatic factors. The parameter optimization is based on minimizing the root mean square error. The best model is chosen according to the Akaike Information Criterion. Finally, we run the calibrated extended model with daily outputs of the regional climate model RegCM3.1, regarding 1961-1990 as the reference period and 2021-2050 together with 2071-2100 as future predictions. The results of the three time intervals are fitted with Beta distributions and compared statistically. The expected changes are discussed.
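A minimal sketch of the fitting workflow described above (theta-logistic dynamics, an additive climatic covariate, RMSE-based parameter optimisation and AIC-based model choice) is given below, using synthetic data in place of the light-trap and RegCM3.1 series.

```python
# Hedged sketch: theta-logistic model with an optional additive climate term,
# fitted by minimising the RMSE and compared via a least-squares AIC.
import numpy as np
from scipy.optimize import minimize

def theta_logistic(n0, years, r, K, theta, beta=0.0, climate=None):
    """One-step-ahead theta-logistic dynamics with an optional climate covariate."""
    n = np.empty(years)
    n[0] = n0
    for t in range(1, years):
        growth = r * (1.0 - (n[t - 1] / K) ** theta)
        if climate is not None:
            growth += beta * climate[t - 1]
        n[t] = n[t - 1] * np.exp(growth)
    return n

def rmse(params, observed, climate=None):
    r, K, theta = params[:3]
    beta = params[3] if len(params) > 3 else 0.0
    if r <= 0 or K <= 0 or theta <= 0:
        return 1e6                                    # penalise invalid parameters
    pred = theta_logistic(observed[0], len(observed), r, K, theta, beta, climate)
    if not np.all(np.isfinite(pred)) or np.any(pred <= 0):
        return 1e6
    return float(np.sqrt(np.mean((np.log(pred) - np.log(observed)) ** 2)))

def aic_from_rmse(rmse_value, n_obs, n_params):
    """Least-squares AIC: n*ln(RSS/n) + 2k, with RSS recovered from the RMSE."""
    rss = n_obs * rmse_value ** 2
    return n_obs * np.log(rss / n_obs) + 2 * n_params

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    obs = theta_logistic(50, 28, r=0.8, K=400, theta=1.2) * rng.lognormal(0, 0.1, 28)
    clim = rng.normal(0, 1, 28)                       # synthetic climatic indicator
    basic = minimize(rmse, x0=[0.5, 300, 1.0], args=(obs,), method="Nelder-Mead")
    extended = minimize(rmse, x0=[0.5, 300, 1.0, 0.0], args=(obs, clim), method="Nelder-Mead")
    for name, fit, k in [("basic", basic, 3), ("extended", extended, 4)]:
        print(name, "RMSE=%.3f" % fit.fun, "AIC=%.1f" % aic_from_rmse(fit.fun, len(obs), k))
```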
Abstract:
The objective of this study was to develop a model to predict the transport and fate of gasoline components of environmental concern in the Miami River by mathematically simulating the movement of dissolved benzene, toluene, xylene (BTX), and methyl-tertiary-butyl ether (MTBE) originating from minor gasoline spills in the inter-tidal zone of the river. Computer codes were based on mathematical algorithms that acknowledge the role of advective and dispersive physical phenomena along the river and the prevailing phase transformations of BTX and MTBE. Phase transformations included volatilization and settling. The model used a finite-difference scheme under steady-state conditions, with a set of numerical equations that was solved by two numerical methods: Gauss-Seidel and Jacobi iterations. A numerical validation process was conducted by comparing the results from both methods with analytical and numerical reference solutions. Since similar trends were achieved after the numerical validation process, it was concluded that the computer codes were algorithmically correct. The Gauss-Seidel iteration yielded a faster convergence rate than the Jacobi iteration, and this code was therefore selected to further develop the computer program and software. The model was then analyzed for its sensitivity. It was found that the model was very sensitive to wind speed but not to sediment settling velocity. Computer software was developed with the model code embedded. The software was provided with two major user-friendly visualized forms, one to interface with the database files and the other to execute and present the graphical and tabulated results. For all predicted concentrations of BTX and MTBE, the maximum concentrations were over an order of magnitude lower than current drinking water standards. It should be pointed out, however, that concentrations below these standards, although not harmful to humans, may be very harmful to organisms at the trophic levels of the Miami River ecosystem and associated waters. This computer model can be used for the rapid assessment and management of the effects of minor gasoline spills on inter-tidal riverine water quality.
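The contrast between the two iteration schemes can be illustrated with a simple stand-in problem: a 1D steady-state advection-dispersion-decay equation discretised by finite differences and solved with both Jacobi and Gauss-Seidel sweeps. This is not the thesis code; the grid size and transport parameters are arbitrary.

```python
# Hedged sketch: solve D*c'' - u*c' - k*c = 0 on n interior nodes with fixed
# boundary concentrations, comparing Jacobi and Gauss-Seidel iteration counts.
import numpy as np

def solve(n=50, D=5.0, u=0.5, k=0.01, dx=1.0, c0=1.0, method="gauss-seidel",
          tol=1e-8, max_iter=20000):
    a_w = D / dx**2 + u / (2 * dx)      # upstream (west) coefficient
    a_e = D / dx**2 - u / (2 * dx)      # downstream (east) coefficient
    a_p = 2 * D / dx**2 + k             # diagonal coefficient
    c = np.zeros(n + 2)
    c[0] = c0                           # fixed upstream boundary concentration
    for it in range(1, max_iter + 1):
        c_old = c.copy()
        src = c_old if method == "jacobi" else c   # Jacobi reads only old values
        for i in range(1, n + 1):
            c[i] = (a_w * src[i - 1] + a_e * src[i + 1]) / a_p
        if np.max(np.abs(c - c_old)) < tol:
            return c, it
    return c, max_iter

if __name__ == "__main__":
    for m in ("jacobi", "gauss-seidel"):
        _, iters = solve(method=m)
        print(f"{m:>12s}: converged in {iters} iterations")
```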
Abstract:
The main objective of this work is to develop a quasi-three-dimensional numerical model to simulate stony debris flows, considering a continuum fluid phase, composed of water and fine sediments, and a non-continuum phase including large particles, such as pebbles and boulders. Large particles are treated in a Lagrangian frame of reference using the Discrete Element Method, while the fluid phase is based on the Eulerian approach, using the Finite Element Method to solve the depth-averaged Navier-Stokes equations in two horizontal dimensions. The particles' equations of motion are solved in three dimensions. The model simulates particle-particle collisions and wall-particle collisions, taking into account that the particles are immersed in a fluid. Bingham and Cross rheological models are used for the continuum phase. Both formulations provide very stable results, even in the range of very low shear rates. The Bingham formulation is better able to simulate the stopping stage of the fluid when the applied shear stresses are low. Results of numerical simulations have been compared with data from laboratory experiments on a flume-fan prototype. The results show that the model is capable of simulating the motion of large particles moving in the fluid flow, handling dense particulate flows and avoiding overlap among particles. An application to simulate the debris flow events that occurred in Northern Venezuela in 1999 shows that the model could replicate the main boulder accumulation areas surveyed by the USGS. The uniqueness of this research lies in the integration of mud flow and stony debris movement in a single modeling tool that can be used for planning and management of debris-flow-prone areas.
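For reference, the two rheological closures named above can be written as effective-viscosity functions of the shear rate. The exact formulation and regularisation used in the model are not given in the abstract; the regularised Bingham form and the standard Cross law below are common choices that remain bounded at very low shear rates, with made-up parameter values.

```python
# Hedged sketch of Bingham and Cross effective viscosities vs shear rate.
import numpy as np

def bingham_viscosity(shear_rate, tau_y, mu_p, m=1000.0):
    """Regularised Bingham fluid: mu_eff = mu_p + tau_y*(1 - exp(-m*g))/g,
    which stays finite as the shear rate g tends to zero."""
    g = np.maximum(shear_rate, 1e-12)
    return mu_p + tau_y * (1.0 - np.exp(-m * g)) / g

def cross_viscosity(shear_rate, mu_0, mu_inf, K, n):
    """Cross fluid: mu_eff = mu_inf + (mu_0 - mu_inf) / (1 + (K*g)**n)."""
    g = np.maximum(shear_rate, 1e-12)
    return mu_inf + (mu_0 - mu_inf) / (1.0 + (K * g) ** n)

if __name__ == "__main__":
    # Illustrative (made-up) parameters for a water and fine-sediment mixture.
    rates = np.array([1e-4, 1e-2, 1.0, 10.0, 100.0])   # shear rate, 1/s
    print("Bingham:", bingham_viscosity(rates, tau_y=50.0, mu_p=0.5))
    print("Cross:  ", cross_viscosity(rates, mu_0=500.0, mu_inf=0.5, K=1.0, n=1.0))
```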
Abstract:
Continuous high-resolution mass accumulation rates (MAR) and X-ray fluorescence (XRF) measurements from marine sediment records in the Bay of Biscay (NE Atlantic) have allowed the determination of the timing and the amplitude of the 'Fleuve Manche' (Channel River) discharges during glacial stages MIS 10, MIS 8, MIS 6 and MIS 4-2. These results have yielded detailed insight into the Middle and Late Pleistocene glaciations in Europe and the drainage network of the western and central European rivers over the last 350 kyr. This study provides clear evidence that the 'Fleuve Manche' connected the southern North Sea basin with the Bay of Biscay during each glacial period and reveals that 'Fleuve Manche' activity during the glaciations MIS 10 and MIS 8 was significantly weaker than during MIS 6 and MIS 2. We correlate the significant 'Fleuve Manche' activity detected during MIS 6 and MIS 2 with the extensive Saalian (Drenthe Substage) and Weichselian glaciations, respectively, confirming that the major Elsterian glaciation precedes glacial MIS 10. In detail, massive 'Fleuve Manche' discharges occurred at ca 155 ka (mid-MIS 6) and during Termination I, while no significant discharges are found during Termination II. It is assumed that a substantial retreat of the European ice sheet at ca 155 ka, followed by ice-free conditions between the British Isles and Scandinavia until Termination II, allowed meltwater to flow northwards through the North Sea basin during the second part of MIS 6. We assume that this glacial pattern corresponds to the Warthe Substage glacial maximum, indicating that the data presented here equate to the Drenthe and Warthe glacial advances at ca 175-160 ka and ca 150-140 ka, respectively. Finally, the correlation of our records with ODP Site 980 reveals that massive 'Fleuve Manche' discharges, related to partial or complete melting of the European ice masses, were synchronous with strong decreases in both the rate of deep-water formation and the strength of the Atlantic thermohaline circulation. 'Fleuve Manche' discharges over the last 350 kyr probably participated, together with other meltwater sources, in the collapse of the thermohaline circulation by freshening the northern Atlantic surface water.
Abstract:
The Model for Prediction Across Scales (MPAS) is a novel set of Earth system simulation components consisting of an atmospheric model, an ocean model and a land-ice model. Its distinct features are the use of unstructured Voronoi meshes and C-grid discretisation, which address shortcomings of global models on regular grids and of limited-area models nested in a forcing data set with respect to parallel scalability, numerical accuracy and physical consistency. This concept allows one to include the feedback of regional land use information on weather and climate at local and global scales in a consistent way, which is impossible to achieve with traditional limited-area modelling approaches. Here, we present an in-depth evaluation of MPAS with regard to the technical aspects of performing model runs and scalability for three medium-size meshes on four different high-performance computing (HPC) sites with different architectures and compilers. We uncover model limitations and identify new aspects of model optimisation that are introduced by the use of unstructured Voronoi meshes. We further demonstrate the performance of MPAS in terms of its capability to reproduce the dynamics of the West African monsoon (WAM) and its associated precipitation in a pilot study. Constrained by available computational resources, we compare 11-month runs for two meshes with observations and a reference simulation from the Weather Research and Forecasting (WRF) model. We show that MPAS can reproduce the atmospheric dynamics on global and local scales in this experiment, but we identify a precipitation excess for the West African region. Finally, we conduct extreme scaling tests on a global 3 km mesh with more than 65 million horizontal grid cells on up to half a million cores. We discuss the necessary modifications of the model code to improve its parallel performance in general and specifically for the HPC environment. We confirm good scaling (70 % parallel efficiency or better) of the MPAS model and provide numbers on the computational requirements for experiments with the 3 km mesh. In doing so, we show that global, convection-resolving atmospheric simulations with MPAS are within reach of current and next generations of high-end computing facilities.
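The scaling statements above rest on standard strong-scaling bookkeeping, sketched below: speedup and parallel efficiency relative to a baseline core count. The timings in the example are invented and are not MPAS measurements.

```python
# Hedged sketch of strong-scaling diagnostics (speedup, parallel efficiency).
def strong_scaling(timings):
    """timings: dict {core_count: wall_time_seconds}; efficiency is computed
    relative to the smallest core count in the dict."""
    base_cores = min(timings)
    base_time = timings[base_cores]
    rows = []
    for cores in sorted(timings):
        speedup = base_time / timings[cores]
        efficiency = speedup * base_cores / cores
        rows.append((cores, timings[cores], speedup, efficiency))
    return rows

if __name__ == "__main__":
    # Invented wall times for illustration only.
    fake_runs = {16_384: 1000.0, 65_536: 290.0, 262_144: 75.0, 524_288: 42.0}
    for cores, t, s, e in strong_scaling(fake_runs):
        print(f"{cores:>8d} cores  {t:7.1f} s  speedup {s:6.1f}  efficiency {e:5.1%}")
```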
Abstract:
Stable isotope records for carbon and oxygen in bulk carbonates, carbon in bulk organic matter, and total and chromium-reducible sulfur in a lacustrine sediment core from Lake Steisslingen (Southwest Germany) show several distinct and abrupt shifts during the last 15,000 years. Variations in the isotopic composition of authigenic carbonates indicate two major phases in the lake history. In the pre-Holocene, the hydrological budget of the lake was apparently stable, so variations of the δ18O values of authigenic carbonates were dominantly controlled by temperature changes. A decrease in δ18Ocarb values of about 2 per mil at the Allerød/Younger Dryas transition is interpreted as a drop in mean annual air temperatures of approximately 5°C. An abrupt temperature increase of similar magnitude is inferred at the Younger Dryas/Preboreal boundary. Throughout most of the Holocene, the isotopic composition of authigenic carbonates was influenced by marked changes in the hydrological budget of the lake. A major positive excursion in the δ13Ccarb and δ18Ocarb values at the beginning of the Atlantic and a smaller one in the Preboreal were related to evaporation effects, which indicate that dry climatic conditions must have prevailed at that time. A simultaneous increase in δ13C values of bulk organic matter at the beginning of the Atlantic suggests a high level of productivity in the lake. As a consequence, aqueous sulfate became limited, as indicated by variations in the δ34S values of total and chromium-reducible sedimentary sulfur. We therefore conclude that the beginning of the Atlantic was characterized not only by dry but also by warm climatic conditions, which triggered higher productivity in the lake. In the Subatlantic sediments, large variations in carbon, oxygen, and sulfur isotope ratios were observed as a result of human activities, which caused considerable perturbations in the biogeochemical element cycling of Lake Steisslingen. The continuous 15 ka record of Lake Steisslingen clearly documents that isotopic proxy data from lacustrine sediments can provide useful information on environmental and climatic changes of local, regional, and, in the case of the Younger Dryas event, even hemispheric significance.
Abstract:
The recently proposed global monsoon hypothesis interprets monsoon systems as part of one global-scale atmospheric overturning circulation, implying a connection between the regional monsoon systems and an in-phase behaviour of all northern hemispheric monsoons on annual timescales (Trenberth et al., 2000). Whether this concept can be applied to past climates and to variability on longer timescales is still under debate, because the monsoon systems exhibit different regional characteristics, such as different seasonality (i.e. onset, peak, and withdrawal). To investigate the interconnection of the different monsoon systems during the pre-industrial Holocene, five transient global climate model simulations have been analysed with respect to the rainfall trend and variability in different sub-domains of the Afro-Asian monsoon region. Our analysis suggests that on millennial timescales with varying orbital forcing, the monsoons do not behave as a tightly connected global system. According to the models, the Indian and North African monsoons are coupled, showing a similar rainfall trend and moderate correlation in rainfall variability in all models. The East Asian monsoon changes independently during the Holocene. The dissimilarities in the seasonality of the monsoon sub-systems lead to a stronger response of the North African and Indian monsoon systems to the Holocene insolation forcing than of the East Asian monsoon, and they affect the seasonal distribution of Holocene rainfall variations. Within the Indian and North African monsoon domains, precipitation changes solely during the summer months, showing a decreasing Holocene precipitation trend. In the East Asian monsoon region, the precipitation signal is determined by an increasing precipitation trend during spring and a decreasing precipitation change during summer, which partly balance each other. A synthesis of reconstructions and the model results does not reveal an impact of the different seasonality on the timing of the Holocene rainfall optimum in the different sub-monsoon systems. They rather indicate locally inhomogeneous rainfall changes and show that single palaeo-records should not be used to characterise the rainfall change and monsoon evolution of entire monsoon sub-systems.
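The diagnostics described above (a millennial rainfall trend per sub-domain and the correlation of detrended rainfall variability between sub-domains) can be sketched as follows, here applied to synthetic regional-mean series rather than the actual transient simulations.

```python
# Hedged sketch: Holocene rainfall trend and inter-domain variability correlation.
import numpy as np

def trend_per_kyr(time_kyr, rainfall):
    """Least-squares linear trend of rainfall (rainfall units per kyr)."""
    slope, _ = np.polyfit(time_kyr, rainfall, 1)
    return slope

def detrended_correlation(time_kyr, a, b):
    """Correlation of rainfall variability after removing the linear trends."""
    resid_a = a - np.polyval(np.polyfit(time_kyr, a, 1), time_kyr)
    resid_b = b - np.polyval(np.polyfit(time_kyr, b, 1), time_kyr)
    return float(np.corrcoef(resid_a, resid_b)[0, 1])

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    t = np.linspace(-9.5, 0.0, 96)                    # kyr relative to present
    shared = rng.normal(0, 0.3, t.size)               # common variability component
    indian = 6.0 - 0.15 * t + shared + rng.normal(0, 0.2, t.size)
    nafrica = 3.5 - 0.20 * t + shared + rng.normal(0, 0.2, t.size)
    easia = 5.0 + 0.01 * t + rng.normal(0, 0.3, t.size)
    print("Indian trend (units per kyr):", round(trend_per_kyr(t, indian), 3))
    print("Indian-NAfrica correlation:  ", round(detrended_correlation(t, indian, nafrica), 2))
    print("Indian-EAsia correlation:    ", round(detrended_correlation(t, indian, easia), 2))
```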
Abstract:
We address the problem of 3D-assisted 2D face recognition in scenarios where the input image is subject to degradations or exhibits intra-personal variations not captured by the 3D model. The proposed solution involves a novel approach to learning a subspace spanned by perturbations caused by the missing modes of variation and image degradations, using 3D face data reconstructed from 2D images rather than 3D capture. This is accomplished by modelling the difference in the texture map of the 3D-aligned input and reference images. A training set of these texture maps then defines a perturbation space which can be represented using PCA bases. Assuming that the image perturbation subspace is orthogonal to the 3D face model space, these additive components can be recovered from an unseen input image, resulting in an improved fit of the 3D face model. The linearity of the model leads to efficient fitting. Experiments show that our method achieves very competitive face recognition performance on the Multi-PIE and AR databases. We also present baseline face recognition results on a new data set exhibiting combined pose and illumination variations as well as occlusion.
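A minimal sketch of the perturbation-subspace idea, under the assumption that the 3D-aligned texture maps are available as flattened vectors: a PCA basis is learned from training differences (input minus reference texture map), and the additive perturbation in an unseen residual is recovered by projection onto that basis. Dimensions and data are placeholders, not the Multi-PIE or AR experiments.

```python
# Hedged sketch: PCA perturbation subspace learned from texture-map differences.
import numpy as np

def learn_perturbation_basis(diff_maps, n_components):
    """diff_maps: (n_samples, n_pixels) array of texture-map differences."""
    mean = diff_maps.mean(axis=0)
    centered = diff_maps - mean
    # PCA via SVD; rows of vt are the principal directions.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:n_components]

def recover_perturbation(residual, mean, basis):
    """Project an unseen residual onto the learned perturbation subspace."""
    coeffs = basis @ (residual - mean)
    return mean + basis.T @ coeffs

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    train_diffs = rng.normal(0, 1, (200, 1024))       # e.g. 32x32 texture maps, flattened
    mean, basis = learn_perturbation_basis(train_diffs, n_components=20)
    unseen = rng.normal(0, 1, 1024)
    perturbation = recover_perturbation(unseen, mean, basis)
    # Removing the recovered perturbation leaves a residual better explained by
    # the 3D face model (assumed orthogonal to this subspace).
    print("recovered perturbation norm:", round(float(np.linalg.norm(perturbation)), 2))
```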
Abstract:
A major weakness of the loading models for pedestrians walking on flexible structures proposed in recent years is the various uncorroborated assumptions made in their development. This applies to the spatio-temporal characteristics of pedestrian loading and to the nature of multi-object interactions. To alleviate this problem, a framework for the determination of localised pedestrian forces on full-scale structures is presented using wireless attitude and heading reference systems (AHRS). An AHRS comprises a triad of tri-axial accelerometers, gyroscopes and magnetometers managed by a dedicated data processing unit, allowing motion in three-dimensional space to be reconstructed. A pedestrian loading model based on a single-point inertial measurement from an AHRS is derived and shown to perform well against benchmark data collected on an instrumented treadmill. Unlike other models, the current model does not take any predefined form, nor does it require any extrapolations as to the timing and amplitude of pedestrian loading. In order to correctly assess the influence of the moving pedestrian on the behaviour of a structure, an algorithm for tracking the point of application of the pedestrian force is developed based on data from a single AHRS attached to a foot. A set of controlled walking tests with a single pedestrian is conducted on a real footbridge for validation purposes. A remarkably good match between the measured and simulated bridge response is found, confirming the applicability of the proposed framework.
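To illustrate the principle of a single-point inertial loading estimate (not the paper's derived model), the sketch below treats the pedestrian as a lumped mass m and approximates the vertical force from the measured vertical acceleration as F(t) ≈ m (g + a_v(t)), with a synthetic acceleration signal standing in for AHRS data.

```python
# Hedged sketch: lumped-mass vertical force estimate from body-mounted acceleration.
import numpy as np

G = 9.81  # gravitational acceleration, m/s^2

def vertical_force(mass_kg, accel_vertical):
    """Vertical force time history from the gravity-removed vertical
    acceleration (m/s^2) of a single point on the body: F = m * (g + a_v)."""
    return mass_kg * (G + np.asarray(accel_vertical))

if __name__ == "__main__":
    # Synthetic acceleration signal at a 2 Hz pacing rate, sampled at 100 Hz.
    t = np.arange(0.0, 5.0, 0.01)
    accel = 2.5 * np.sin(2 * np.pi * 2.0 * t)         # m/s^2, made-up amplitude
    force = vertical_force(mass_kg=75.0, accel_vertical=accel)
    print("peak force ~ %.0f N (static weight %.0f N)" % (force.max(), 75.0 * G))
```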
Abstract:
The architectural transcription factor HMGA2 is abundantly expressed during embryonic development. In several malignant neoplasias, including prostate cancer, high re-expression of HMGA2 is correlated with malignancy and poor prognosis. The let-7 miRNA family is described to regulate HMGA2 negatively, and the balance of let-7 and HMGA2 is discussed as playing a major role in tumour aetiology. To further analyse the role of HMGA2 in prostate cancer, a stable and highly reproducible in vitro model system is a precondition. Herein we established a canine CT1258-EGFP-HMGA2 prostate cancer cell line stably overexpressing HMGA2 linked to EGFP, and in addition the reference cell line CT1258-EGFP expressing solely EGFP to exclude EGFP-induced effects. Both recombinant cell lines were characterised by fluorescence microscopy, flow cytometry and immunocytochemistry. The proliferative effect of ectopically overexpressed HMGA2 was determined via BrdU assays. Comparative karyotyping of the derived and the initial CT1258 cell lines was performed to analyse chromosome consistency. The impact of the ectopic HMGA2 expression on its regulator let-7a was analysed by quantitative real-time PCR. Fluorescence microscopy and immunocytochemistry detected successful expression of the EGFP-HMGA2 fusion protein, accumulating exclusively in the nucleus. Gene expression analyses confirmed HMGA2 overexpression in CT1258-EGFP-HMGA2 in comparison to CT1258-EGFP and native cells. Significantly higher let-7a expression levels were found in CT1258-EGFP-HMGA2 and CT1258-EGFP. The BrdU assays detected an increased proliferation of CT1258-EGFP-HMGA2 cells compared to CT1258-EGFP and native CT1258. The cytogenetic analyses of CT1258-EGFP and CT1258-EGFP-HMGA2 revealed a hyperdiploid karyotype comparable to that described for native CT1258 cells. To further investigate the impact of recombinant overexpressed HMGA2 on CT1258 cells, additional selected targets described to underlie HMGA2 regulation were screened. The new fluorescent CT1258-EGFP-HMGA2 cell line is a stable tool enabling in vitro and in vivo analyses of HMGA2-mediated effects on cells and of the development and pathogenesis of prostate cancer.
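The let-7a comparison by quantitative real-time PCR would typically be evaluated as a fold change; the sketch below uses the standard 2^(-ΔΔCt) method with a housekeeping reference gene. The abstract does not state the quantification scheme or any Ct values, so everything here is an illustrative placeholder.

```python
# Hedged sketch: relative expression by the 2^(-ΔΔCt) method.
def relative_expression(ct_target_sample, ct_ref_sample,
                        ct_target_control, ct_ref_control):
    """Fold change of the target gene in a sample relative to a control,
    normalised to a reference (housekeeping) gene."""
    d_ct_sample = ct_target_sample - ct_ref_sample
    d_ct_control = ct_target_control - ct_ref_control
    return 2.0 ** -(d_ct_sample - d_ct_control)

if __name__ == "__main__":
    # Made-up Ct values: let-7a in CT1258-EGFP-HMGA2 vs native CT1258,
    # normalised to a hypothetical housekeeping gene.
    fold = relative_expression(ct_target_sample=22.1, ct_ref_sample=18.0,
                               ct_target_control=24.3, ct_ref_control=18.1)
    print(f"let-7a fold change vs native cells: {fold:.1f}x")
```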