27 results for 3D representation method
Abstract:
We present a novel algorithm for joint state–parameter estimation using sequential three-dimensional variational data assimilation (3D-Var) and demonstrate its application in the context of morphodynamic modelling using an idealised two-parameter 1D sediment transport model. The new scheme combines a static representation of the state background error covariances with a flow-dependent approximation of the state–parameter cross-covariances. For the case presented here, this involves calculating a local finite-difference approximation of the gradient of the model with respect to the parameters. The new method is easy to implement and computationally inexpensive to run. Experimental results are positive, with the scheme able to recover the model parameters to a high level of accuracy. We expect that there is potential for successful application of this new methodology to larger, more realistic models with more complex parameterisations.
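The flow-dependent cross-covariance in the abstract above rests on a local finite-difference approximation of the model's sensitivity to its parameters. A minimal sketch of that one ingredient, assuming a generic `model(x, params)` forecast function (the function names and the perturbation size `eps` are illustrative, not taken from the paper):

```python
import numpy as np

def fd_parameter_gradient(model, x, params, eps=1e-4):
    """Local finite-difference approximation of the model's sensitivity
    to each parameter: column j is (M(x, p + eps*e_j) - M(x, p)) / eps.
    The resulting matrix can be combined with the parameter covariance
    to approximate state-parameter cross-covariances."""
    base = model(x, params)
    grad = np.empty((base.size, params.size))
    for j in range(params.size):
        p_pert = params.copy()
        p_pert[j] += eps
        grad[:, j] = (model(x, p_pert) - base) / eps
    return grad
```

One forward model run per parameter is what keeps this "computationally inexpensive" for a two-parameter model.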
Abstract:
*** Purpose – Computed tomography (CT) for 3D reconstruction entails a huge number of coplanar fan-beam projections for each of a large number of 2D slice images, and excessive radiation intensities and dosages. For some applications its rate of throughput is also inadequate. A technique for overcoming these limitations is outlined. *** Design/methodology/approach – A novel method to reconstruct 3D surface models of objects is presented, using, typically, ten 2D projective images. These images are generated by relative motion between this set of objects and a set of ten fan-beam X-ray sources and sensors, with their viewing axes suitably distributed in 2D angular space. *** Findings – The method entails a radiation dosage several orders of magnitude lower than CT, and requires far less computational power. Experimental results are given to illustrate the capability of the technique. *** Practical implications – The substantially lower cost of the method and, more particularly, its dramatically lower irradiation make it relevant to many applications precluded by current techniques. *** Originality/value – The method can be used in many applications such as aircraft hold-luggage screening and 3D industrial modelling and measurement, and it should also have important applications in medical diagnosis and surgery.
Abstract:
The IntFOLD-TS method was developed according to the guiding principle that model quality assessment would be the most critical stage of our template-based modelling pipeline. Thus, the IntFOLD-TS method first generates numerous alternative models, using in-house versions of several different sequence–structure alignment methods, which are then ranked in terms of global quality using our top-performing quality assessment method, ModFOLDclust2. In addition to the predicted global quality scores, predictions of local errors are also provided in the resulting coordinate files, using scores that represent the predicted deviation of each residue in the model from the equivalent residue in the native structure. The IntFOLD-TS method was found to generate high-quality 3D models for many of the CASP9 targets, whilst also providing highly accurate predictions of their per-residue errors. This important information may help to make the 3D models produced by the IntFOLD-TS method more useful for guiding future experimental work.
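The generate-then-rank step described above can be sketched generically: sort the candidate models by a predicted global quality score, then attach per-residue error predictions to the winner. Here `global_quality` and `local_error` are stand-ins for ModFOLDclust2's scores, and the dictionary model representation is purely illustrative:

```python
def select_and_annotate(models, global_quality, local_error):
    """Rank alternative models by predicted global quality (highest
    first), then pair each residue of the best model with its
    predicted deviation from the (unknown) native structure, mimicking
    the per-residue scores written into the coordinate files."""
    ranked = sorted(models, key=global_quality, reverse=True)
    best = ranked[0]
    return best, [(res, local_error(best, res)) for res in best["residues"]]
```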
Abstract:
Motivation: The ability of a simple method (MODCHECK) to determine the sequence–structure compatibility of a set of structural models generated by fold recognition is tested in a thorough benchmark analysis. Four Model Quality Assessment Programs (MQAPs) were tested on 188 targets from the latest LiveBench-9 automated structure evaluation experiment. We systematically test and evaluate whether the MQAP methods can successfully detect native-like models. Results: We show that, compared with the other three methods tested, MODCHECK is the most reliable method, consistently performing best at top-model selection and at ranking the models. In addition, we show that the choice of model similarity score used to assess a model's similarity to the experimental structure can influence the overall performance of these tools. Although these MQAP methods fail to improve the model selection performance of methods that already incorporate protein three-dimensional (3D) structural information, an improvement is observed for methods that are purely sequence-based, including the best profile–profile methods. This suggests that even the best sequence-based fold recognition methods can still be improved by taking 3D structural information into account.
Abstract:
The ever-increasing demand for high image quality requires fast and efficient methods for noise reduction, and the best-known order-statistics filter is the median filter. A method is presented to calculate the median of a set of N W-bit integers in W/B time steps. Blocks containing B-bit slices are used to find B bits of the median at a time; a novel quantum-like representation allows the median to be computed in an accelerated manner compared with the best-known method (W time steps). The general method allows a variety of designs to be synthesised systematically. A further novel architecture to calculate the median for a moving set of N integers is also discussed.
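The slice-by-slice selection described above can be illustrated in software: each of the W/B steps histograms one B-bit slice of the surviving candidates and fixes B bits of the median, most significant slice first. This is a plain radix-select sketch of the idea, not the paper's hardware design or its quantum-like representation:

```python
def bitslice_median(values, width=8, slice_bits=4):
    """Find the lower median of unsigned `width`-bit integers without
    sorting, examining one `slice_bits`-wide slice of every surviving
    candidate per step (width/slice_bits steps in total)."""
    k = (len(values) - 1) // 2            # rank of the lower median
    mask = (1 << slice_bits) - 1
    candidates = list(values)
    median = 0
    for shift in range(width - slice_bits, -1, -slice_bits):
        # Histogram the current slice over the surviving candidates.
        counts = [0] * (1 << slice_bits)
        for v in candidates:
            counts[(v >> shift) & mask] += 1
        # Walk the buckets in ascending order until rank k falls inside one.
        for bucket, c in enumerate(counts):
            if k < c:
                break
            k -= c
        median |= bucket << shift         # B more bits of the median fixed
        candidates = [v for v in candidates if (v >> shift) & mask == bucket]
    return median
```

With `width=8` and `slice_bits=4` this takes W/B = 2 passes, matching the complexity claim, although a software loop obviously lacks the parallelism of the hardware blocks.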
Abstract:
Motivation: Modelling the 3D structures of proteins can often be enhanced if more than one fold template is used during the modelling process. However, in many cases, this may also result in poorer model quality for a given target or alignment method. There is a need for modelling protocols that can both consistently and significantly improve 3D models and provide an indication of when models might not benefit from the use of multiple target-template alignments. Here, we investigate the use of both global and local model quality prediction scores produced by ModFOLDclust2 to improve the selection of target-template alignments for the construction of multiple-template models. Additionally, we evaluate clustering the resulting population of multi- and single-template models for the improvement of our IntFOLD-TS tertiary structure prediction method. Results: We find that using accurate local model quality scores to guide alignment selection is the most consistent way to significantly improve models for each of the sequence-to-structure alignment methods tested. In addition, using accurate global model quality for re-ranking alignments, prior to selection, further improves the majority of multi-template modelling methods tested. Furthermore, subsequent clustering of the resulting population of multiple-template models significantly improves the quality of selected models compared with the previous version of our tertiary structure prediction method, IntFOLD-TS.
Abstract:
Adequate contact with the soil is essential for water and nutrient absorption by plant roots, but the determination of root–soil contact is a challenging task because it is difficult to visualize roots in situ and to quantify their interactions with the soil at the scale of micrometres. A method to determine root–soil contact using X-ray microtomography was developed. Contact areas were determined from 3D volumetric images using segmentation and iso-surface determination tools. The accuracy of the method was tested with physical model systems of contact between two objects (phantoms). Volumes, surface areas and contact areas calculated from the measured phantoms were compared with those estimated from image analysis. The volume was accurate to within 0.3%, the surface area to within 2–4%, and the contact area to within 2.5%. Maize and lupin roots were grown in soil (<2 mm) and vermiculite at matric potentials of −0.03 and −1.6 MPa, and in aggregate fractions of 4–2, 2–1, 1–0.5 and <0.5 mm at a matric potential of −0.03 MPa. The contact of the roots with their growth medium was determined from 3D volumetric images. Macroporosity (>70 µm) of the soil sieved to different aggregate fractions was calculated from binarized data. Root–soil contact was greater in soil than in vermiculite and increased with decreasing aggregate or particle size. The differences in root–soil contact could not be explained solely by the decrease in porosity with decreasing aggregate size but may also result from changes in particle and aggregate packing around the root.
Abstract:
In this paper, ensembles of forecasts (of up to six hours) are studied from a convection-permitting model with a representation of model error due to unresolved processes. The ensemble prediction system (EPS) used is an experimental convection-permitting version of the UK Met Office’s 24-member Global and Regional Ensemble Prediction System (MOGREPS). The method of representing model error variability, which perturbs parameters within the model’s parameterisation schemes, has been modified, and we investigate the impact of applying this scheme in different ways. These are: a control ensemble where all ensemble members have the same parameter values; an ensemble where the parameters differ between members but are fixed in time; and ensembles where the parameters are updated randomly every 30 or 60 min. The choice of parameters and their ranges of variability have been determined from expert opinion and parameter sensitivity tests. A case of frontal rain over the southern UK has been chosen, which has a multi-banded rainfall structure. The consequences of including model error variability in the case studied are mixed and are summarised as follows. The multiple banding, evident in the radar, is not captured by any single member. However, the single band is positioned in some members where a secondary band is present in the radar. This is found for all ensembles studied. Adding model error variability with parameters fixed in time does increase the ensemble spread for near-surface variables like wind and temperature, but can actually decrease the spread of the rainfall. Perturbing the parameters periodically throughout the forecast does not further increase the spread and exhibits “jumpiness” in the spread at the times when the parameters are perturbed. Adding model error variability gives an improvement in forecast skill after the first 2–3 h of the forecast for near-surface temperature and relative humidity.
For precipitation skill scores, adding model error variability has the effect of improving the skill in the first 1–2 h of the forecast, but then of reducing the skill after that. Complementary experiments were performed where the only difference between members was the set of parameter values (i.e. no initial condition variability). The resulting spread was found to be significantly less than the spread from initial condition variability alone.
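The parameter-perturbation scheme described in this abstract can be sketched generically: each member carries its own parameter set, drawn uniformly within expert-specified ranges, and the set is either held fixed or redrawn periodically through the forecast. The `step_model` callable, the parameter names, and the ranges below are all hypothetical placeholders, not MOGREPS internals:

```python
import random

def draw_parameters(ranges, rng):
    """Draw one value per scheme parameter, uniformly within its
    expert-specified range (hypothetical ranges; illustration only)."""
    return {name: rng.uniform(lo, hi) for name, (lo, hi) in ranges.items()}

def run_ensemble(step_model, state0, ranges, n_members=24,
                 n_steps=6, update_every=2, seed=0):
    """Advance each member with its own parameter set, redrawing the
    parameters every `update_every` steps (cf. the 30/60-min updates);
    set update_every > n_steps to keep parameters fixed in time."""
    rng = random.Random(seed)
    finals = []
    for _ in range(n_members):
        params = draw_parameters(ranges, rng)
        state = state0
        for step in range(n_steps):
            if step and step % update_every == 0:
                params = draw_parameters(ranges, rng)
            state = step_model(state, params)
        finals.append(state)
    return finals
```

The control ensemble in the abstract corresponds to giving every member the same parameter dictionary instead of a fresh draw.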
Abstract:
Svalgaard (2014) has recently pointed out that the calibration of the Helsinki magnetic observatory’s H-component variometer was probably in error in published data for the years 1866–1874.5, and that this makes the interdiurnal variation index based on daily means, IDV(1d) (Lockwood et al., 2013a), and the interplanetary magnetic field strength derived from it (Lockwood et al., 2013b), too low around the peak of solar cycle 11. We use data from the modern Nurmijärvi station, relatively close to the site of the original Helsinki Observatory, to confirm a 30% underestimation in this interval, and hence our results are fully consistent with the correction derived by Svalgaard. We show that the best method for recalibration uses the Helsinki Ak(H) and aa indices and is accurate to ±10%. This makes it preferable to recalibration using either the sunspot number or the diurnal range of geomagnetic activity, which we find to be accurate to ±20%. In the case of Helsinki data during cycle 11, the two recalibration methods produce very similar corrections, which are here confirmed using newly digitised data from the nearby St Petersburg observatory and also using declination data from Helsinki. However, we show that the IDV index is, compared with later years, too similar to the sunspot number before 1872, revealing that the independence of the two data series has been lost: either the geomagnetic data used to compile IDV have been corrected using sunspot numbers, or vice versa, or both. We present corrected data sequences for both the IDV(1d) index and the reconstructed IMF (interplanetary magnetic field). We also analyse the relationship between the derived near-Earth IMF and the sunspot number and point out the relevance of the prior history of solar activity, in addition to the contemporaneous value, to estimating any “floor” value of the near-Earth interplanetary field.
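At its simplest, confirming a fixed fractional miscalibration amounts to fitting a multiplicative correction between the suspect series and a trusted overlapping series. The paper's actual recalibration via the Ak(H) and aa indices is more involved; this is only a least-squares sketch of the underlying idea:

```python
import numpy as np

def recalibration_factor(trusted, suspect):
    """Least-squares scale factor k minimising ||trusted - k * suspect||
    over an overlap interval. A 30% underestimation in the suspect
    series corresponds to k ≈ 1/0.7 ≈ 1.43."""
    return float(trusted @ suspect / (suspect @ suspect))
```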
Abstract:
This paper presents a software-based study of a hardware-oriented non-sorting median calculation method for a set of integers. The method divides the binary representation of each integer element in the set into bit slices in order to find the element located in the middle position. The method exhibits a linear complexity order, and our analysis shows that the best performance in execution time is obtained when 4-bit slices are used for 8-bit and 16-bit integers, for almost any data set size. Results suggest that a software implementation of the bit-slice method for median calculation outperforms sorting-based methods, with the improvement increasing for larger data set sizes. For data set sizes of N > 5, our simulations show an improvement of at least 40%.
Abstract:
We present a novel method for retrieving high-resolution, three-dimensional (3-D) nonprecipitating cloud fields in both overcast and broken-cloud situations. The method uses scanning cloud radar and multiwavelength zenith radiances to obtain gridded 3-D liquid water content (LWC) and effective radius (r_e) and 2-D column-mean droplet number concentration (N_d). By using an adaptation of the ensemble Kalman filter, radiances are used to constrain the optical properties of the clouds using a forward model that employs full 3-D radiative transfer, while also providing full error statistics given the uncertainty in the observations. To evaluate the new method, we first perform retrievals using synthetic measurements from a challenging cumulus cloud field produced by a large-eddy simulation snapshot. Uncertainty due to measurement error in overhead clouds is estimated at 20% in LWC and 6% in r_e, but the true error can be greater due to uncertainties in the assumed droplet size distribution and radiative transfer. Over the entire domain, LWC and r_e are retrieved with average errors of 0.05–0.08 g m⁻³ and ~2 μm, respectively, depending on the number of radiance channels used. The method is then evaluated using real data from the Atmospheric Radiation Measurement program Mobile Facility at the Azores. Two case studies are considered, one stratocumulus and one cumulus. Where available, the liquid water path retrieved directly above the observation site was found to be in good agreement with independent values obtained from microwave radiometer measurements, with an error of 20 g m⁻².
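The ensemble Kalman filter machinery invoked above can be sketched generically: an ensemble of candidate cloud states is updated by observed radiances through sample covariances between state and predicted observations. This is a standard perturbed-observation EnKF analysis step, not the paper's specific adaptation; the `forward` callable stands in for the 3-D radiative transfer model:

```python
import numpy as np

def enkf_update(ensemble, obs, forward, obs_err_var, rng):
    """One stochastic EnKF analysis step. `ensemble` is
    (n_members, n_state); `forward` maps a state vector to predicted
    observations (e.g. radiances); perturbed observations give the
    analysis ensemble the correct error statistics."""
    n = ensemble.shape[0]
    hx = np.array([forward(x) for x in ensemble])   # predicted observations
    xa = ensemble - ensemble.mean(axis=0)           # state anomalies
    ha = hx - hx.mean(axis=0)                       # observation anomalies
    # Sample covariances and the Kalman gain.
    pxh = xa.T @ ha / (n - 1)
    phh = ha.T @ ha / (n - 1) + np.diag(np.atleast_1d(obs_err_var))
    gain = pxh @ np.linalg.inv(phh)
    # One perturbed-observation realisation per member.
    obs_pert = obs + rng.normal(0.0, np.sqrt(obs_err_var), size=hx.shape)
    return ensemble + (obs_pert - hx) @ gain.T
```

The spread of the analysis ensemble is what supplies the "full error statistics given the uncertainty in the observations".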
Abstract:
This study describes a simple technique that improves a recently developed 3D sub-diffraction imaging method based on three-photon absorption of commercially available quantum dots. The method combines imaging of biological samples via tri-exciton generation in quantum dots with deconvolution and spectral multiplexing, resulting in a novel approach for multi-color imaging of even thick biological samples at a 1.4- to 1.9-fold better spatial resolution. This approach is realized on a conventional confocal microscope equipped with standard continuous-wave lasers. We demonstrate the potential of multi-color tri-exciton imaging of quantum dots combined with deconvolution on viral vesicles in lentivirally transduced cells, as well as on intermediate filaments in three-dimensional clusters of mouse-derived neural stem cells (neurospheres) and dense microtubule arrays in myotubes formed by stacks of differentiated C2C12 myoblasts.