15 results for 3D geological modelling

in CentAUR: Central Archive University of Reading - UK


Relevance: 90.00%

Abstract:

Purpose – Computed tomography (CT) for 3D reconstruction entails a huge number of coplanar fan-beam projections for each of a large number of 2D slice images, and excessive radiation intensities and dosages. For some applications its rate of throughput is also inadequate. A technique for overcoming these limitations is outlined.

Design/methodology/approach – A novel method to reconstruct 3D surface models of objects is presented, using, typically, ten 2D projective images. These images are generated by relative motion between the set of objects and a set of ten fan-beam X-ray sources and sensors, with their viewing axes suitably distributed in 2D angular space.

Findings – The method entails a radiation dosage several orders of magnitude lower than CT, and requires far less computational power. Experimental results are given to illustrate the capability of the technique.

Practical implications – The substantially lower cost of the method and, more particularly, its dramatically lower irradiation make it relevant to many applications precluded by current techniques.

Originality/value – The method can be used in many applications such as aircraft hold-luggage screening and 3D industrial modelling and measurement, and it should also have important applications in medical diagnosis and surgery.
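
The abstract gives no implementation detail, so the following is only a loose, hypothetical analogue of recovering a 3D surface envelope from a small number of 2D views: a silhouette-carving sketch in Python, with the geometry, view count and toy silhouettes invented here rather than taken from the paper.

```python
# Hypothetical sketch: carve a voxel volume using silhouettes from a few
# projection directions (an illustrative analogue, NOT the authors' algorithm).
import numpy as np

def carve_visual_hull(silhouettes, angles, grid_size=64):
    """Keep voxels whose projection falls inside every 2D silhouette.

    silhouettes: list of 2D boolean arrays (one per view, same shape)
    angles: list of viewing angles (radians) about the vertical axis
    """
    n = grid_size
    xs = np.linspace(-1, 1, n)                       # voxel centres in [-1, 1]^3
    X, Y, Z = np.meshgrid(xs, xs, xs, indexing="ij")
    keep = np.ones((n, n, n), dtype=bool)

    for sil, theta in zip(silhouettes, angles):
        h, w = sil.shape
        # Orthographic projection onto an image plane rotated by theta
        u = X * np.cos(theta) + Y * np.sin(theta)    # horizontal image axis
        v = Z                                        # vertical image axis
        col = np.clip(((u + 1) / 2 * (w - 1)).astype(int), 0, w - 1)
        row = np.clip(((1 - (v + 1) / 2) * (h - 1)).astype(int), 0, h - 1)
        keep &= sil[row, col]
    return keep

# Toy example: ten equally spaced views of a circular silhouette -> a cylinder-like hull
views = [np.fromfunction(lambda r, c: (r - 32) ** 2 + (c - 32) ** 2 < 20 ** 2, (64, 64))
         for _ in range(10)]
hull = carve_visual_hull(views, np.linspace(0, np.pi, 10, endpoint=False))
print("occupied voxels:", hull.sum())
```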

Relevance: 40.00%

Abstract:

Virtual reality has the potential to improve visualisation of building design and construction, but its implementation in the industry has yet to reach maturity. Present-day translation of building data to virtual reality is often unidirectional and unsatisfactory. Three different approaches to the creation of models are identified and described in this paper. Consideration is given to the potential of both advances in computer-aided design and the emerging standards for data exchange to facilitate an integrated use of virtual reality. Commonalities and differences between computer-aided design and virtual reality packages are reviewed, and trials of current systems are described. The trials have been conducted to explore the technical issues related to the integrated use of CAD and virtual environments within the house building sector of the construction industry and to investigate the practical use of the new technology.

Relevance: 30.00%

Abstract:

The nature and magnitude of climatic variability during the period of middle Pliocene warmth (ca 3.29–2.97 Ma) are poorly understood. We present a suite of palaeoclimate modelling experiments incorporating an advanced atmospheric general circulation model (GCM), coupled to a Q-flux ocean model, for 3.29, 3.12 and 2.97 Ma BP. Astronomical solutions for the periods in question were derived from the Berger and Loutre BL2 astronomical solution. Boundary conditions, excluding sea surface temperatures (SSTs), which were predicted by the slab-ocean model, were provided from the USGS PRISM2 2°×2° digital data set. The model results indicate that little annual variation (0.5°C) in SSTs, relative to a ‘control’ experiment, occurred during the middle Pliocene in response to the altered orbital configurations. Annual surface air temperatures also displayed little variation. Seasonally, surface air temperatures displayed a trend of cooler temperatures during December, January and February, and warmer temperatures during June, July and August. This pattern is consistent with altered seasonality resulting from the prescribed orbital configurations. Precipitation changes follow the seasonal trend observed for surface air temperature. Compared to the present day, surface wind strength and wind stress over the North Atlantic, North Pacific and Southern Ocean remained greater in each of the Pliocene experiments. This suggests that wind-driven gyral circulation may have been consistently greater during the middle Pliocene. The trend of climatic variability predicted by the GCM for the middle Pliocene accords with geological data. However, it is unclear if the model correctly simulates the magnitude of the variation. This uncertainty derives from (a) the relative insensitivity of the GCM to perturbation of the imposed boundary conditions, (b) a lack of detailed time-series data concerning changes to terrestrial ice cover and greenhouse gas concentrations for the middle Pliocene, and (c) difficulties in representing the effects of ‘climatic history’ in snap-shot GCM experiments.

Relevance: 30.00%

Abstract:

Two ongoing projects at ESSC that involve the development of new techniques for extracting information from airborne LiDAR data and combining this information with environmental models will be discussed. The first project, in conjunction with Bristol University, aims to improve 2D river flood flow models by using remote sensing to provide distributed data for model calibration and validation. Airborne LiDAR can provide such models with a dense and accurate floodplain topography together with vegetation heights for parameterisation of model friction. The vegetation height data can be used to specify a friction factor at each node of a model’s finite element mesh. A LiDAR range image segmenter has been developed which converts a LiDAR image into separate raster maps of surface topography and vegetation height for use in the model. Satellite and airborne SAR data have been used to measure flood extent remotely in order to validate the modelled flood extent. Methods have also been developed for improving the models by decomposing the model’s finite element mesh to reflect floodplain features such as hedges and trees having different frictional properties to their surroundings. Originally developed for rural floodplains, the segmenter is currently being extended to provide DEMs and friction parameter maps for urban floods, by fusing the LiDAR data with digital map data.

The second project is concerned with the extraction of tidal channel networks from LiDAR. These networks are important features of the inter-tidal zone and play a key role in tidal propagation and in the evolution of salt-marshes and tidal flats. The study of their morphology is currently an active area of research, and a number of theories related to networks have been developed which require validation using dense and extensive observations of network forms and cross-sections. The conventional method of measuring networks is cumbersome and subjective, involving manual digitisation of aerial photographs in conjunction with field measurement of channel depths and widths for selected parts of the network. A semi-automatic technique has been developed to extract networks from LiDAR data of the inter-tidal zone. A multi-level knowledge-based approach has been implemented, whereby low-level algorithms first extract channel fragments based mainly on image properties, and a high-level processing stage then improves the network using domain knowledge. The approach adopted at the low level uses multi-scale edge detection to detect channel edges, then associates adjacent anti-parallel edges together to form channels. The higher-level processing includes a channel repair mechanism.
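
As a rough illustration of the low-level stage described above, the Python sketch below detects edges that persist across several Gaussian smoothing scales of a LiDAR DEM; the scales, threshold and toy DEM are assumptions made here for illustration, not values from the project.

```python
# Hypothetical sketch of multi-scale edge detection on a LiDAR DEM
# (illustrative only; not the project's code).
import numpy as np
from scipy import ndimage

def multiscale_edges(dem, scales=(1.0, 2.0, 4.0), threshold=0.5):
    """Return a boolean edge map from a 2D array of surface heights."""
    votes = np.zeros_like(dem, dtype=int)
    for sigma in scales:
        grad = ndimage.gaussian_gradient_magnitude(dem, sigma=sigma)
        grad = grad / (grad.max() + 1e-12)   # normalise each scale
        votes += grad > threshold
    # Keep edges detected at a majority of scales
    return votes >= (len(scales) + 1) // 2

# Toy DEM: a flat tidal surface cut by a sinuous channel, plus noise
y, x = np.mgrid[0:200, 0:200]
channel = np.abs(y - (100 + 20 * np.sin(x / 15.0))) < 4
dem = np.where(channel, -1.5, 0.0) + 0.05 * np.random.default_rng(0).standard_normal((200, 200))
edges = multiscale_edges(dem)
print("edge pixels:", int(edges.sum()))
```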

Relevance: 30.00%

Abstract:

As the building industry moves towards low-impact buildings, research attention is being drawn to the reduction of carbon dioxide emissions and waste. From design and construction to operation and demolition, various building materials are used throughout the whole building lifecycle, involving significant energy consumption and waste generation. Building Information Modelling (BIM) is emerging as a tool that can support holistic design decision-making for reducing embodied carbon and waste production in the building lifecycle. This study aims to establish a framework for assessing embodied carbon and waste underpinned by BIM technology. On the basis of a review of current research, the framework comprises functional modules for embodied carbon computation and waste estimation, a knowledge base of construction and demolition methods, a repository of building component information, and an inventory of construction materials’ energy and carbon. Through both static 3D model visualisation and dynamic modelling supported by the framework, embodied energy (carbon), waste and associated costs can be analysed within the boundaries of cradle-to-gate, construction, operation and demolition. The proposed holistic modelling framework makes it possible to analyse embodied carbon and waste, including associated costs, from different building lifecycle perspectives. It brings together existing segmented embodied carbon and waste estimation into a unified model, so that interactions between various parameters across the different building lifecycle phases can be better understood. It can thus improve design-decision support for optimal low-impact building development. The applicability of this framework is expected to be developed and tested on industrial projects in the near future.
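
A minimal sketch of the kind of computation the framework’s inventory and waste modules imply: look up each material’s embodied carbon factor and waste rate and aggregate them over a bill of quantities taken from a BIM model. All factors, rates and quantities below are invented placeholders, not values from the study.

```python
# Illustrative embodied carbon and waste aggregation (placeholder numbers only).
from dataclasses import dataclass

@dataclass
class Material:
    name: str
    embodied_carbon: float  # kgCO2e per kg of material (cradle-to-gate)
    waste_rate: float       # fraction of delivered quantity wasted on site

INVENTORY = {
    "concrete": Material("concrete", embodied_carbon=0.11, waste_rate=0.04),
    "steel":    Material("steel",    embodied_carbon=1.46, waste_rate=0.02),
    "timber":   Material("timber",   embodied_carbon=0.31, waste_rate=0.10),
}

def assess(bill_of_quantities):
    """bill_of_quantities: {material name: quantity in kg} extracted from a BIM model."""
    carbon = waste = 0.0
    for name, qty in bill_of_quantities.items():
        m = INVENTORY[name]
        carbon += qty * m.embodied_carbon
        waste += qty * m.waste_rate
    return {"embodied_carbon_kgCO2e": carbon, "waste_kg": waste}

print(assess({"concrete": 250_000, "steel": 12_000, "timber": 8_000}))
```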

Relevance: 30.00%

Abstract:

We present a novel algorithm for joint state-parameter estimation using sequential three-dimensional variational data assimilation (3D-Var) and demonstrate its application in the context of morphodynamic modelling using an idealised two-parameter 1D sediment transport model. The new scheme combines a static representation of the state background error covariances with a flow-dependent approximation of the state-parameter cross-covariances. For the case presented here, this involves calculating a local finite difference approximation of the gradient of the model with respect to the parameters. The new method is easy to implement and computationally inexpensive to run. Experimental results are positive, with the scheme able to recover the model parameters to a high level of accuracy. We expect that there is potential for successful application of this new methodology to larger, more realistic models with more complex parameterisations.
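
A hedged Python sketch of the general idea follows, with a toy 1D model standing in for the sediment transport model: the state is augmented with the parameters, the state-parameter cross-covariances are approximated from a finite-difference gradient of the model with respect to the parameters, and a standard variational (BLUE-type) update is applied to the augmented vector. The toy model, covariance values and observation network are assumptions made here for illustration.

```python
# Hedged sketch of joint state-parameter estimation with an augmented state
# (toy 1D "bed evolution" model; all numbers invented, not from the paper).
import numpy as np

rng = np.random.default_rng(1)
n, n_obs = 50, 10

def model(x, p, dt=1.0):
    """Toy forward model: advect the bed profile x with speed p[0] and decay rate p[1]."""
    grid = np.arange(n)
    x_new = np.interp(grid - p[0] * dt, grid, x, period=n)
    return x_new * np.exp(-p[1] * dt)

# Truth and an imperfect background (wrong parameters)
p_true, p_b = np.array([1.5, 0.02]), np.array([1.0, 0.05])
x0 = np.exp(-0.5 * ((np.arange(n) - 15.0) / 4.0) ** 2)
x_truth, x_b = model(x0, p_true), model(x0, p_b)

# Observations of the state only
H = np.zeros((n_obs, n))
H[np.arange(n_obs), np.linspace(0, n - 1, n_obs, dtype=int)] = 1.0
R = 0.01 * np.eye(n_obs)
y = H @ x_truth + 0.1 * rng.standard_normal(n_obs)

# Static state background errors and parameter background errors
B_x = 0.2 * np.eye(n)
B_p = np.diag([0.25, 0.01])

# Flow-dependent state-parameter cross-covariances via a finite-difference
# gradient of the model with respect to the parameters: B_xp ~ (dM/dp) B_p
eps = 1e-4
dMdp = np.column_stack([(model(x0, p_b + eps * e) - model(x0, p_b - eps * e)) / (2 * eps)
                        for e in np.eye(2)])
B_xp = dMdp @ B_p

# Augmented background covariance, observation operator and analysis update
B = np.block([[B_x, B_xp], [B_xp.T, B_p]])
H_aug = np.hstack([H, np.zeros((n_obs, 2))])
w_b = np.concatenate([x_b, p_b])
K = B @ H_aug.T @ np.linalg.inv(H_aug @ B @ H_aug.T + R)
w_a = w_b + K @ (y - H_aug @ w_b)
print("background parameters:", p_b, "analysed parameters:", w_a[n:], "truth:", p_true)
```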

Relevance: 30.00%

Abstract:

View-based and Cartesian representations provide rival accounts of visual navigation in humans, and here we explore possible models for the view-based case. A visual “homing” experiment was undertaken by human participants in immersive virtual reality. The distributions of end-point errors on the ground plane differed significantly in shape and extent depending on visual landmark configuration and relative goal location. A model based on simple visual cues captures important characteristics of these distributions. Augmenting the visual features to include 3D elements such as stereo and motion parallax results in a set of models that describe the data accurately, demonstrating the effectiveness of a view-based approach.

Relevance: 30.00%

Abstract:

Data assimilation is predominantly used for state estimation: combining observational data with model predictions to produce an updated model state that most accurately approximates the true system state whilst keeping the model parameters fixed. This updated model state is then used to initiate the next model forecast. Even with perfect initial data, inaccurate model parameters will lead to the growth of prediction errors. To generate reliable forecasts we need good estimates of both the current system state and the model parameters. This paper presents research into data assimilation methods for morphodynamic model state and parameter estimation. First, we focus on state estimation and describe the implementation of a three-dimensional variational (3D-Var) data assimilation scheme in a simple 2D morphodynamic model of Morecambe Bay, UK. The assimilation of observations of bathymetry derived from SAR satellite imagery and a ship-borne survey is shown to significantly improve the predictive capability of the model over a 2-year run. Here, the model parameters are set by manual calibration; this is laborious and is found to produce different parameter values depending on the type and coverage of the validation dataset. The second part of this paper considers the problem of model parameter estimation in more detail. We explain how, by employing the technique of state augmentation, it is possible to use data assimilation to estimate uncertain model parameters concurrently with the model state. This approach removes inefficiencies associated with manual calibration and enables more effective use of observational data. We outline the development of a novel hybrid sequential 3D-Var data assimilation algorithm for joint state-parameter estimation and demonstrate its efficacy using an idealised 1D sediment transport model. The results of this study are extremely positive and suggest that there is great potential for the use of data assimilation-based state-parameter estimation in coastal morphodynamic modelling.
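
To make the state augmentation idea concrete, the toy Python example below (numbers invented, not from the paper) updates an augmented vector of state and parameters with a single variational analysis step; because the observation operator never sees the parameters, they are corrected only through the state-parameter cross-covariance block.

```python
# Hedged illustration of state augmentation: parameters are unobserved and are
# updated only via cross-covariances in the augmented background matrix.
import numpy as np

def augmented_analysis(x_b, p_b, B, H, R, y):
    """Single BLUE / 3D-Var-style update of the augmented vector w = [state; parameters]."""
    w_b = np.concatenate([x_b, p_b])
    H_aug = np.hstack([H, np.zeros((H.shape[0], p_b.size))])  # parameters unobserved
    K = B @ H_aug.T @ np.linalg.inv(H_aug @ B @ H_aug.T + R)
    w_a = w_b + K @ (y - H_aug @ w_b)
    return w_a[: x_b.size], w_a[x_b.size:]

x_b = np.array([0.0, 0.0])        # background state (e.g. bathymetry at 2 points)
p_b = np.array([0.5])             # background parameter (e.g. a transport coefficient)
H = np.eye(2)                     # both state points observed
R = 0.1 * np.eye(2)
y = np.array([1.0, 1.0])          # observations pull the state upwards

B_x, B_p = np.eye(2), np.array([[0.2]])
for b_xp in (np.zeros((2, 1)), np.array([[0.3], [0.3]])):
    B = np.block([[B_x, b_xp], [b_xp.T, B_p]])
    x_a, p_a = augmented_analysis(x_b, p_b, B, H, R, y)
    print("cross-covariance", b_xp.ravel(), "-> analysed parameter", p_a)
```

With a zero cross-covariance block the parameter stays at its background value; with a non-zero block the same observations also correct it, which is the essence of the approach described above.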

Relevance: 30.00%

Abstract:

Motivation: Modelling the 3D structures of proteins can often be enhanced if more than one fold template is used during the modelling process. However, in many cases, this may also result in poorer model quality for a given target or alignment method. There is a need for modelling protocols that can both consistently and significantly improve 3D models and provide an indication of when models might not benefit from the use of multiple target-template alignments. Here, we investigate the use of both global and local model quality prediction scores produced by ModFOLDclust2 to improve the selection of target-template alignments for the construction of multiple-template models. Additionally, we evaluate clustering the resulting population of multi- and single-template models for the improvement of our IntFOLD-TS tertiary structure prediction method.

Results: We find that using accurate local model quality scores to guide alignment selection is the most consistent way to significantly improve models for each of the sequence-to-structure alignment methods tested. In addition, using accurate global model quality for re-ranking alignments, prior to selection, further improves the majority of multi-template modelling methods tested. Furthermore, subsequent clustering of the resulting population of multiple-template models significantly improves the quality of selected models compared with the previous version of our tertiary structure prediction method, IntFOLD-TS.
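
A schematic Python sketch of the selection logic as described: filter and re-rank candidate alignments using predicted local and global quality scores, then pick a consensus model from the resulting population by mean pairwise similarity. The score fields, thresholds and random data are illustrative assumptions, not ModFOLDclust2 or IntFOLD-TS code.

```python
# Illustrative alignment selection and consensus picking (invented scores and thresholds).
import numpy as np

def select_alignments(alignments, min_mean_local=0.5, top_n=5):
    """alignments: list of dicts with 'global' (float) and 'local' (1D per-residue array)."""
    good = [a for a in alignments if np.mean(a["local"]) >= min_mean_local]
    return sorted(good, key=lambda a: a["global"], reverse=True)[:top_n]

def consensus_pick(similarity):
    """similarity: pairwise model-model similarity matrix (e.g. a TM-score-like measure).
    The consensus model is the one most similar, on average, to all the others."""
    np.fill_diagonal(similarity, 0.0)
    return int(np.argmax(similarity.mean(axis=1)))

rng = np.random.default_rng(2)
alignments = [{"global": rng.uniform(0.3, 0.8), "local": rng.uniform(0.2, 0.9, 100)}
              for _ in range(20)]
chosen = select_alignments(alignments)
sim = rng.uniform(0.4, 0.9, (len(chosen), len(chosen)))
sim = (sim + sim.T) / 2
print("selected alignments:", len(chosen), "consensus model index:", consensus_pick(sim))
```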

Relevance: 30.00%

Abstract:

It is often assumed that humans generate a 3D reconstruction of the environment, either in egocentric or world-based coordinates, but the steps involved are unknown. Here, we propose two reconstruction-based models, evaluated using data from two tasks in immersive virtual reality. We model the observer’s prediction of landmark location based on standard photogrammetric methods and then combine location predictions to compute likelihood maps of navigation behaviour. In one model, each scene point is treated independently in the reconstruction; in the other, the pertinent variable is the spatial relationship between pairs of points. Participants viewed a simple environment from one location, were transported (virtually) to another part of the scene and were asked to navigate back. Error distributions varied substantially with changes in scene layout; we compared these directly with the likelihood maps to quantify the success of the models. We also measured error distributions when participants manipulated the location of a landmark to match the preceding interval, providing a direct test of the landmark-location stage of the navigation models. Models such as these, which start with scenes and end with a probabilistic prediction of behaviour, are likely to be increasingly useful for understanding 3D vision.
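
As an illustrative sketch of the final step (assuming Gaussian noise and simple geometry, not the paper’s photogrammetric machinery), the Python snippet below combines independent per-landmark predictions of the goal position into a normalised likelihood map over a ground-plane grid.

```python
# Illustrative likelihood-map construction from noisy per-landmark goal estimates.
import numpy as np

def likelihood_map(predicted_goals, sigmas, extent=5.0, n=200):
    """predicted_goals: (k, 2) goal estimates, one per landmark; sigmas: (k,) position stds."""
    xs = np.linspace(-extent, extent, n)
    X, Y = np.meshgrid(xs, xs)
    log_l = np.zeros_like(X)
    for (gx, gy), s in zip(predicted_goals, sigmas):
        d2 = (X - gx) ** 2 + (Y - gy) ** 2
        log_l += -d2 / (2 * s ** 2)          # independent isotropic Gaussians
    log_l -= log_l.max()
    L = np.exp(log_l)
    return xs, L / L.sum()                    # normalised likelihood map

# Three landmarks giving slightly disagreeing goal estimates
xs, L = likelihood_map(np.array([[0.0, 0.0], [0.3, -0.2], [-0.1, 0.4]]),
                       sigmas=np.array([0.5, 0.8, 0.6]))
peak = np.unravel_index(np.argmax(L), L.shape)
print("most likely end-point:", (round(xs[peak[1]], 2), round(xs[peak[0]], 2)))
```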

Relevance: 30.00%

Abstract:

Atmospheric dust is an important feedback in the climate system, potentially affecting the radiative balance and chemical composition of the atmosphere and providing nutrients to terrestrial and marine ecosystems. Yet the potential impact of dust on the climate system, both in the anthropogenically disturbed future and the naturally varying past, remains to be quantified. The geologic record of dust provides the opportunity to test earth system models designed to simulate dust. Records of dust can be obtained from ice cores, marine sediments, and terrestrial (loess) deposits. Although rarely unequivocal, these records document a variety of processes (source, transport and deposition) in the dust cycle, stored in each archive as changes in clay mineralogy, isotopes, grain size, and concentration of terrigenous materials. Although the extraction of information from each type of archive is slightly different, the basic controls on these dust indicators are the same. Changes in the dust flux and particle size might be controlled by a combination of (a) source area extent, (b) dust emission efficiency (wind speed) and atmospheric transport, (c) atmospheric residence time of dust, and/or (d) relative contributions of dry settling and rainout of dust. Similarly, changes in mineralogy reflect (a) source area mineralogy and weathering and (b) shifts in atmospheric transport. The combination of these geological data with process-based, forward-modelling schemes in global earth system models provides an excellent means of achieving a comprehensive picture of the global pattern of dust accumulation rates, their controlling mechanisms, and how those mechanisms may vary regionally. The Dust Indicators and Records of Terrestrial and MArine Palaeoenvironments (DIRTMAP) database has been established to provide a global palaeoenvironmental data set that can be used to validate earth system model simulations of the dust cycle over the past 150,000 years.

Relevance: 30.00%

Abstract:

Facility management (FM), from a service-oriented approach, addresses the functions and requirements of different services such as energy management, space planning and security. Different services require different information to meet the needs arising from each service. Object-based Building Information Modelling (BIM) offers only limited support for FM services, even though the technology can generate 3D models that semantically represent a facility’s information dynamically over the lifecycle of a building. This paper presents a semiotics-inspired framework to extend BIM from a service-oriented perspective. The extended BIM, which specifies FM services and the information they require, will be able to express building service information in the right format for the right purposes. The service-oriented approach concerns the pragmatic aspect of building information, beyond the semantic level. Pragmatics defines and provides the context for the utilisation of building information. The semiotics theory adopted in this paper addresses the pragmatic issues of utilising BIM for FM services.

Relevance: 30.00%

Abstract:

IntFOLD is an independent web server that integrates our leading methods for structure and function prediction. The server provides a simple unified interface that aims to make complex protein modelling data more accessible to life scientists. The server web interface is designed to be intuitive and integrates a complex set of quantitative data, so that 3D modelling results can be viewed on a single page and interpreted by non-expert modellers at a glance. The only required input to the server is an amino acid sequence for the target protein. Here we describe major performance and user interface updates to the server, which comprises an integrated pipeline of methods for: tertiary structure prediction, global and local 3D model quality assessment, disorder prediction, structural domain prediction, function prediction and modelling of protein-ligand interactions. The server has been independently validated during numerous CASP (Critical Assessment of Techniques for Protein Structure Prediction) experiments, as well as being continuously evaluated by the CAMEO (Continuous Automated Model Evaluation) project. The IntFOLD server is available at: http://www.reading.ac.uk/bioinf/IntFOLD/

Relevance: 30.00%

Abstract:

The challenge of moving past the classic Windows, Icons, Menus, Pointer (WIMP) interface, i.e. by turning it ‘3D’, has resulted in much research and development. To evaluate the impact of 3D on the ‘finding a target picture in a folder’ task, we built a 3D WIMP interface that allowed the systematic manipulation of visual depth, visual aids and the semantic category distribution of targets versus non-targets, together with the detailed measurement of lower-level stimulus features. Across two separate experiments (a large-sample web-based experiment to understand associations, and a controlled lab experiment using eye tracking to understand user focus), we investigated how visual depth, use of visual aids, use of semantic categories, and lower-level stimulus features (i.e. contrast, colour and luminance) affect how successfully participants are able to search for, and detect, the target image. Moreover, in the lab-based experiment, we captured pupillometry measurements to allow consideration of the influence of increasing cognitive load as a result of either an increasing number of items on the screen or the inclusion of visual depth.

Our findings showed that increasing the visible layers of depth, and inclusion of converging lines, did not impact target detection times, errors, or failure rates. Low-level features, including colour, luminance, and number of edges, did correlate with differences in target detection times, errors, and failure rates. Our results also revealed that semantic sorting algorithms significantly decreased target detection times. Increased semantic contrasts between a target and its neighbours correlated with an increase in detection errors. Finally, pupillometric data did not provide evidence of any correlation between the number of visible layers of depth and pupil size; however, using structural equation modelling, we demonstrated that cognitive load does influence detection failure rates when there is luminance contrast between the target and its surrounding neighbours. These results suggest that WIMP interaction designers should consider stimulus-driven factors, which were shown to influence the efficiency with which a target icon can be found in a 3D WIMP interface.
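
One of the low-level features mentioned, luminance contrast between a target and its neighbours, could be computed along the following lines; the Michelson-style definition, window positions and synthetic screen below are assumptions for illustration, not the study’s exact measures.

```python
# Illustrative luminance contrast between a target icon and its neighbouring icons.
import numpy as np

def luminance_contrast(screen, target_box, neighbour_boxes):
    """Michelson-style contrast between a target icon's mean luminance and the
    mean luminance of its neighbouring icons.
    screen: 2D luminance array in [0, 1]; boxes: (row, col, height, width)."""
    def mean_lum(box):
        r, c, h, w = box
        return float(screen[r:r + h, c:c + w].mean())
    t = mean_lum(target_box)
    nb = float(np.mean([mean_lum(b) for b in neighbour_boxes]))
    lo, hi = sorted((t, nb))
    return (hi - lo) / (hi + lo + 1e-12)

rng = np.random.default_rng(3)
screen = rng.uniform(0.2, 0.4, (300, 400))   # synthetic mid-grey interface backdrop
screen[100:140, 180:220] += 0.4              # a brighter target icon
print(round(luminance_contrast(screen, (100, 180, 40, 40),
                               [(100, 130, 40, 40), (100, 230, 40, 40)]), 3))
```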