13 results for 3D model acquisition

in Aston University Research Archive


Relevance:

100.00%

Publisher:

Abstract:

Developmental neurotoxicity is a major issue in human health and may have lasting neurological implications. In this preliminary study we exposed differentiating Ntera2/clone D1 (NT2/D1) cell neurospheres to known human teratogens classed as non-embryotoxic (acrylamide), weakly embryotoxic (lithium, valproic acid) and strongly embryotoxic (hydroxyurea), as listed by the European Centre for the Validation of Alternative Methods (ECVAM), and examined endpoints of cell viability and expression of neuronal protein markers specific to the central nervous system, in order to identify developmental neurotoxins. Following induction of neuronal differentiation, valproic acid had the most significant effect on neurogenesis, in terms of reduced viability and decreased neuronal markers. Lithium had the least effect on viability and did not significantly alter the expression of neuronal markers. Hydroxyurea significantly reduced cell viability but did not affect neuronal protein marker expression. Acrylamide reduced neurosphere viability but did not affect neuronal protein marker expression. Overall, this NT2/D1-based neurosphere model of neurogenesis may provide the basis for a model of developmental neurotoxicity in vitro.

Relevance:

100.00%

Publisher:

Abstract:

Efficient numerical models facilitate the study and design of solid oxide fuel cells (SOFCs), stacks, and systems. Whilst accuracy and reliability of the computed results are usually sought by researchers, the corresponding modelling complexity can create practical difficulties regarding implementation flexibility and computational cost. The main objective of this article is to adapt a simple but viable numerical tool for evaluation of our experimental rig. Accordingly, a model for a multi-layer SOFC surrounded by a constant-temperature furnace is presented, trained and validated against experimental data. The model consists of a four-layer structure comprising the stand, two interconnects, and the PEN (Positive electrode-Electrolyte-Negative electrode), each approximated by a lumped parameter model. The heating process through the surrounding chamber is also considered. We used one set of V-I characteristic data for parameter adjustment, followed by model verification against two independent data sets. The model results show good agreement with practical data, offering a significant improvement over reduced models in which the impact of external heat loss is neglected. Furthermore, a thermal analysis for adiabatic and non-adiabatic processes is carried out to capture the thermal behaviour of a single cell, followed by a polarisation loss assessment. Finally, model-based design of experiments is demonstrated for a case study.
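A minimal sketch of the parameter-adjustment step described above, assuming a much simpler polarisation model (open-circuit voltage, area-specific resistance and an activation-like term) than the paper's four-layer lumped thermal model; the data, function names and parameter values are illustrative only.

```python
import numpy as np
from scipy.optimize import curve_fit

# Illustrative polarisation model: cell voltage = OCV minus ohmic (ASR) and
# activation-like losses.  A stand-in for the paper's four-layer lumped model.
def cell_voltage(i, ocv, asr, a_act):
    # i: current density [A/cm^2]; ocv: open-circuit voltage [V];
    # asr: area-specific resistance [ohm cm^2]; a_act: activation term [V]
    return ocv - asr * i - a_act * np.log1p(i / 0.01)

# Synthetic "measured" V-I characteristic standing in for the experimental rig data.
rng = np.random.default_rng(0)
i_data = np.linspace(0.05, 0.8, 20)
v_data = cell_voltage(i_data, 1.05, 0.45, 0.04) + rng.normal(0, 0.005, i_data.size)

# Parameter adjustment: fit the model to one V-I data set, then (in the paper)
# verify against independent data sets; here we only report the fit residual.
popt, _ = curve_fit(cell_voltage, i_data, v_data, p0=[1.0, 0.5, 0.05])
rms = np.sqrt(np.mean((cell_voltage(i_data, *popt) - v_data) ** 2))
print("fitted OCV, ASR, activation term:", popt, "RMS residual [V]:", rms)
```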

Relevance:

90.00%

Publisher:

Abstract:

This paper addresses the problem of obtaining complete, detailed reconstructions of textureless shiny objects. We present an algorithm which uses silhouettes of the object, as well as images obtained under changing illumination conditions. In contrast with previous photometric stereo techniques, ours is not limited to a single viewpoint but produces accurate reconstructions in full 3D. A number of images of the object are obtained from multiple viewpoints, under varying lighting conditions. Starting from the silhouettes, the algorithm recovers camera motion and constructs the object's visual hull. This is then used to recover the illumination and initialize a multiview photometric stereo scheme to obtain a closed surface reconstruction. There are two main contributions in this paper: First, we describe a robust technique to estimate light directions and intensities and, second, we introduce a novel formulation of photometric stereo which combines multiple viewpoints and, hence, allows closed surface reconstructions. The algorithm has been implemented as a practical model acquisition system. Here, a quantitative evaluation of the algorithm on synthetic data is presented together with complete reconstructions of challenging real objects. Finally, we show experimentally how, even in the case of highly textured objects, this technique can greatly improve on correspondence-based multiview stereo results.
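For orientation, the per-pixel core of photometric stereo in its classic single-view Lambertian form is sketched below; the paper's contributions (light estimation from the visual hull and the multiview closed-surface formulation) are not reproduced, and the light directions and normal used here are invented.

```python
import numpy as np

# Classic Lambertian photometric stereo for one pixel: I_k = albedo * max(L_k . n, 0)
# under known light directions L_k; least squares on the observations gives albedo * n.
L = np.array([[0.0, 0.0, 1.0],
              [0.7, 0.0, 0.7],
              [0.0, 0.7, 0.7],
              [-0.5, -0.5, 0.7]], dtype=float)
L /= np.linalg.norm(L, axis=1, keepdims=True)

true_n = np.array([0.2, 0.1, 0.97]); true_n /= np.linalg.norm(true_n)
I = 0.8 * np.clip(L @ true_n, 0.0, None)     # synthetic observed intensities (albedo 0.8)

g, *_ = np.linalg.lstsq(L, I, rcond=None)    # g = albedo * n (this pixel is lit by all lights)
albedo = np.linalg.norm(g)
print("recovered normal:", g / albedo, "albedo:", albedo)
```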

Relevance:

90.00%

Publisher:

Abstract:

This paper addresses the problem of obtaining detailed 3D reconstructions of human faces in real time and with inexpensive hardware. We present an algorithm based on a monocular multi-spectral photometric-stereo setup. This setup is known to capture highly detailed deforming 3D surfaces at high frame rates without requiring expensive hardware or a synchronized light stage. However, the main challenge of such a setup is the calibration stage, which depends on the lighting setup and how it interacts with the specific material being captured, in this case human faces. For this purpose we develop a self-calibration technique in which the person being captured is asked to perform a rigid motion in front of the camera while maintaining a neutral expression. Rigidity constraints are then used to compute the head's motion with a structure-from-motion algorithm. Once the motion is obtained, a multi-view stereo algorithm reconstructs a coarse 3D model of the face. This coarse model is then used to estimate the lighting parameters with a stratified approach: in the first step we use a RANSAC search to identify purely diffuse points on the face and to simultaneously estimate the diffuse reflectance model; in the second step we apply non-linear optimization to fit a non-Lambertian reflectance model to the outliers of the previous step. The calibration procedure is validated with synthetic and real data.
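A toy sketch of the first calibration step: a RANSAC search that fits a Lambertian model and labels its inliers as the purely diffuse points. The real system operates on multi-spectral face images with normals from the coarse 3D model, whereas the observations, thresholds and iteration count below are synthetic placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

# Per-point data: normals (in the real system taken from the coarse 3D face model)
# and observed intensities.  Inliers follow I = albedo * (l . n); ~30% are corrupted
# by specularities or shadows and should be rejected as non-diffuse.
n_pts = 200
normals = rng.normal(size=(n_pts, 3))
normals[:, 2] = np.abs(normals[:, 2])            # keep camera-facing hemisphere
normals /= np.linalg.norm(normals, axis=1, keepdims=True)

true_l = np.array([0.3, 0.2, 0.93]); true_l /= np.linalg.norm(true_l)
I = 0.7 * np.clip(normals @ true_l, 0.0, None)
outliers = rng.random(n_pts) < 0.3
I[outliers] += 0.5 * rng.random(outliers.sum())  # non-Lambertian deviations

best_inliers, best_g = np.zeros(n_pts, bool), None
for _ in range(500):                             # RANSAC over minimal 3-point samples
    sample = rng.choice(n_pts, size=3, replace=False)
    g, *_ = np.linalg.lstsq(normals[sample], I[sample], rcond=None)   # g = albedo * l
    inliers = np.abs(normals @ g - I) < 0.02
    if inliers.sum() > best_inliers.sum():
        best_inliers, best_g = inliers, g

print("diffuse (inlier) points:", int(best_inliers.sum()))
print("estimated light direction:", best_g / np.linalg.norm(best_g),
      "albedo:", np.linalg.norm(best_g))
```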

Relevance:

80.00%

Publisher:

Abstract:

This work is concerned with approximate inference in dynamical systems, from a variational Bayesian perspective. When modelling real-world dynamical systems, stochastic differential equations appear as a natural choice, mainly because of their ability to model the noise of the system by adding a variation of some stochastic process to the deterministic dynamics. Hence, inference in such processes has drawn much attention. Here a new extended framework is derived that is based on a local polynomial approximation of a recently proposed variational Bayesian algorithm. The paper begins by showing that the new extension of this variational algorithm can be used for state estimation (smoothing) and converges to the original algorithm. However, the main focus is on estimating the (hyper-)parameters of these systems, i.e. drift parameters and diffusion coefficients. The new approach is validated on a range of systems which vary in dimensionality and non-linearity: the Ornstein–Uhlenbeck process, whose exact likelihood can be computed analytically; the univariate, highly non-linear stochastic double-well system; and the multivariate chaotic stochastic Lorenz ’63 (3D) model. As a special case the algorithm is also applied to the 40-dimensional stochastic Lorenz ’96 system. In our investigation we compare this new approach with a variety of other well-known methods, such as hybrid Monte Carlo, the dual unscented Kalman filter and the full weak-constraint 4D-Var algorithm, and analyse empirically their asymptotic behaviour as the observation density or the length of the time window increases. In particular we show that we are able to estimate parameters in both the drift (deterministic) and the diffusion (stochastic) parts of the model evolution equations using our new methods.
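The Ornstein–Uhlenbeck process is the one benchmark above with an analytically tractable likelihood. The sketch below simulates it exactly and recovers its drift and diffusion parameters from the discrete-time AR(1) transition density; this is only a baseline sanity check under assumed parameter values, not the variational Bayesian smoother the paper develops.

```python
import numpy as np

rng = np.random.default_rng(1)

# Ornstein-Uhlenbeck SDE: dx = -theta * x dt + sigma dW.
# Its transition density is Gaussian, so the exact likelihood is an AR(1) model.
theta, sigma, dt, n = 2.0, 0.5, 0.01, 50_000
a = np.exp(-theta * dt)                        # exact AR(1) coefficient
s = sigma * np.sqrt((1 - a**2) / (2 * theta))  # exact transition std (no discretisation error)

x = np.empty(n)
x[0] = 0.0
for t in range(1, n):
    x[t] = a * x[t - 1] + s * rng.standard_normal()

# Maximum-likelihood estimates of the drift and diffusion parameters from the path.
a_hat = np.sum(x[:-1] * x[1:]) / np.sum(x[:-1] ** 2)
theta_hat = -np.log(a_hat) / dt
resid_var = np.var(x[1:] - a_hat * x[:-1])
sigma_hat = np.sqrt(resid_var * 2 * theta_hat / (1 - a_hat**2))
print(f"theta ~ {theta_hat:.3f} (true {theta}),  sigma ~ {sigma_hat:.3f} (true {sigma})")
```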

Relevance:

40.00%

Publisher:

Abstract:

This study extends previous research concerning intervertebral motion registration by means of 2D dynamic fluoroscopy to obtain a more comprehensive 3D description of vertebral kinematics. The problem of estimating the 3D rigid pose of a CT volume of a vertebra from its 2D X-ray fluoroscopy projection is addressed. 2D-3D registration is obtained by maximising a measure of similarity between Digitally Reconstructed Radiographs (obtained from the CT volume) and the real fluoroscopic projection. X-ray energy correction was performed. To assess the method a calibration model was realised: a dry sheep vertebra was rigidly fixed to a frame of reference including metallic markers. An accurate measurement of the 3D orientation was obtained via single-camera calibration of the markers and taken as the true 3D vertebra pose; the vertebra's 3D pose was then estimated and the results compared. Error analysis revealed an accuracy of the order of 0.1 degrees for the rotation angles, of about 1 mm for displacements parallel to the fluoroscopic plane, and of the order of 10 mm for the orthogonal displacement. © 2010 P. Bifulco et al.
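A toy illustration of the registration principle (maximise an image-similarity measure between a Digitally Reconstructed Radiograph and the fluoroscopic projection as a function of pose), reduced to a single in-plane rotation of a synthetic volume; the DRR here is a simple parallel-projection sum rather than the energy-corrected renderer used in the study, and all shapes and values are made up.

```python
import numpy as np
from scipy import ndimage, optimize

rng = np.random.default_rng(2)

# Synthetic "CT volume": a smoothed block standing in for a vertebra.
vol = np.zeros((32, 32, 32))
vol[10:22, 12:20, 14:24] = 1.0
vol = ndimage.gaussian_filter(vol, 1.5)

def drr(volume, angle_deg):
    """Toy Digitally Reconstructed Radiograph: rotate the volume and integrate
    along one axis as a parallel-beam projection (no X-ray energy correction)."""
    rotated = ndimage.rotate(volume, angle_deg, axes=(1, 2), reshape=False, order=1)
    return rotated.sum(axis=0)

# Simulated fluoroscopic image: the object seen at an unknown 7-degree rotation.
fluoro = drr(vol, 7.0) + rng.normal(0.0, 0.05, (32, 32))

def neg_similarity(angle):
    """Negative normalised cross-correlation between DRR(angle) and the fluoroscopy."""
    d = drr(vol, float(angle))
    d = d - d.mean()
    f = fluoro - fluoro.mean()
    return -np.sum(d * f) / (np.linalg.norm(d) * np.linalg.norm(f))

res = optimize.minimize_scalar(neg_similarity, bounds=(-20.0, 20.0), method="bounded")
print("estimated in-plane rotation [deg]:", round(res.x, 2))
```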

Relevance:

30.00%

Publisher:

Abstract:

We propose a novel framework where an initial classifier is learned by incorporating prior information extracted from an existing sentiment lexicon. Preferences on expectations of sentiment labels of those lexicon words are expressed using generalized expectation criteria. Documents classified with high confidence are then used as pseudo-labeled examples for automatic domain-specific feature acquisition. The word-class distributions of such self-learned features are estimated from the pseudo-labeled examples and are used to train another classifier by constraining the model's predictions on unlabeled instances. Experiments on both the movie review data and the multi-domain sentiment dataset show that our approach attains comparable or better performance than existing weakly-supervised sentiment classification methods despite using no labeled documents.
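A compact sketch of the overall loop: seed labels from a sentiment lexicon, keep only high-confidence documents as pseudo-labels, then train a classifier on features learned from them. The generalized-expectation training of the original is replaced here by a simple lexicon vote, and the corpus, lexicon and thresholds are toy placeholders.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Toy unlabeled corpus and sentiment lexicon (placeholders, not the paper's data).
docs = ["great plot and superb acting", "dull story terrible pacing",
        "superb film great fun", "terrible acting and a dull script",
        "the cinematography was great", "a dull and terrible experience"]
lexicon = {"great": 1, "superb": 1, "fun": 1, "dull": 0, "terrible": 0}

# Step 1: initial labelling driven by the lexicon prior (a simple vote here,
# standing in for the generalized-expectation-trained classifier).
def lexicon_score(doc):
    votes = [lexicon[w] for w in doc.split() if w in lexicon]
    return float(np.mean(votes)) if votes else 0.5

scores = np.array([lexicon_score(d) for d in docs])
confident = np.flatnonzero((scores > 0.7) | (scores < 0.3))   # high-confidence docs
pseudo_y = (scores > 0.5).astype(int)

# Step 2: learn domain-specific word features from the pseudo-labelled documents
# and train another classifier on them.
vec = CountVectorizer()
X = vec.fit_transform(docs)
clf = MultinomialNB().fit(X[confident], pseudo_y[confident])
print(clf.predict(vec.transform(["superb pacing", "dull plot"])))
```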

Relevance:

30.00%

Publisher:

Abstract:

The purpose of this research was to investigate the effects of Processing Instruction (VanPatten, 1996, 2007), as an input-based model for teaching second language grammar, on Syrian learners' processing abilities. The research investigated the effects of Processing Instruction on the acquisition of English relative clauses by Syrian learners using a quasi-experimental design. Three separate groups were involved: Processing Instruction, Traditional Instruction and a Control Group. For assessment, a pre-test, a direct post-test and a delayed post-test were used as the main tools for eliciting data. A questionnaire was also distributed to participants in the Processing Instruction group to give them the opportunity to provide feedback on the treatment they received in comparison with the Traditional Instruction they were used to. Four hypotheses were formulated on the possible effectiveness of Processing Instruction on Syrian learners' linguistic system. It was hypothesised that Processing Instruction would improve learners' processing abilities, leading to an improvement in learners' linguistic system and, in turn, to better performance in the comprehension and production of English relative clauses. The main source of data was analysed statistically using ANOVA, supported by Cohen's d calculations, which showed the magnitude of the effects of the three treatments. Results of the analysis showed that both the Processing Instruction and Traditional Instruction groups had improved after treatment; however, the Processing Instruction group significantly outperformed the other two groups in the comprehension of relative clauses. The analysis concluded that Processing Instruction is a useful tool for teaching relative clauses to Syrian learners. This was reinforced by participants' responses to the questionnaire, which favoured Processing Instruction over Traditional Instruction. This research has theoretical and pedagogical implications. Theoretically, the study showed support for the Input Hypothesis: Processing Instruction had a positive effect on input processing as it affected learners' linguistic system, reflected in learners' ability to produce a structure which they had not been asked to produce. Pedagogically, the present research showed that Processing Instruction is a useful tool for teaching English grammar in the context where the experiment was carried out, as it had a large effect on learners' performance.
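For reference, the standard pooled-standard-deviation form of Cohen's d used for between-group effect sizes of this kind (the abstract itself does not reproduce the formula):

```latex
d = \frac{\bar{x}_1 - \bar{x}_2}{s_{\text{pooled}}},
\qquad
s_{\text{pooled}} = \sqrt{\frac{(n_1 - 1)\,s_1^2 + (n_2 - 1)\,s_2^2}{n_1 + n_2 - 2}}
```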

Relevance:

30.00%

Publisher:

Abstract:

Animal models of acquired epilepsies aim to provide researchers with tools for use in understanding the processes underlying the acquisition, development and establishment of the disorder. Typically, following a systemic or local insult, vulnerable brain regions undergo a process leading to the development, over time, of spontaneous recurrent seizures. Many such models make use of a period of intense seizure activity or status epilepticus, and this may be associated with high mortality and/or global damage to large areas of the brain. These undesirable elements have driven improvements in the design of chronic epilepsy models, for example the lithium-pilocarpine epileptogenesis model. Here, we present an optimised model of chronic epilepsy that reduces mortality to 1% whilst retaining features of high epileptogenicity and development of spontaneous seizures. Using local field potential recordings from hippocampus in vitro as a probe, we show that the model does not result in significant loss of neuronal network function in area CA3 and, instead, subtle alterations in network dynamics appear during a process of epileptogenesis, which eventually leads to a chronic seizure state. The model’s features of very low mortality and high morbidity in the absence of global neuronal damage offer the chance to explore the processes underlying epileptogenesis in detail, in a population of animals not defined by their resistance to seizures, whilst acknowledging and being driven by the 3Rs (Replacement, Refinement and Reduction of animal use in scientific procedures) principles.

Relevance:

30.00%

Publisher:

Abstract:

Small and Medium Enterprises (SMEs) play an important part in the economy of any country. Initially, a flat management hierarchy, quick response to market changes and cost competitiveness were seen as the competitive characteristics of an SME. Recently, in developed economies, technological capabilities (TCs) management, that is, managing existing technological capabilities and developing or assimilating new ones for continuous process and product innovation, has become important for both large organisations and SMEs to achieve sustained competitiveness. Therefore, various technological innovation capability (TIC) models have been developed at firm level to assess firms' innovation capability. The output of these models helps policy makers and firm managers devise policies for deepening a firm's technical knowledge generation, acquisition and exploitation capabilities for a sustained technological competitive edge. However, in developing countries TCs management is more a matter of TCs upgrading: acquisition of TCs from abroad, followed by their assimilation, innovation and exploitation. Most TIC models for developing countries delineate the level of TIC required as firms move from the acquisition level to the innovative level. However, these models do not provide tools for assessing the existing level of TIC of a firm and the various factors affecting it, which would support practical interventions for TCs upgrading of firms towards improved or new processes and products. Recently, the Government of Pakistan (GOP) has realised the importance of TCs upgrading in SMEs, especially export-oriented ones, for their sustained competitiveness. The GOP has launched various initiatives with local and foreign assistance to identify ways and means of upgrading local SMEs' capabilities. This research targets this gap and develops a TIC assessment model for identifying the existing level of TIC of manufacturing SMEs located in clusters in Sialkot, Pakistan. SME executives in three different export-oriented clusters at Sialkot were interviewed to analyse the technological capability development initiatives (CDIs) taken by them to develop and upgrade their firms' TCs. Data analysed at CDI, firm, cluster and cross-cluster level first helped classify the interviewed firms as leaders, followers and reactors, with leader firms claiming to introduce mostly new CDIs to their cluster. Second, the data analysis showed that the interviewed leader firms mostly exhibited 'learning by interacting' and 'learning by training' capabilities for acquiring expertise from customers and international consultants. However, these leader firms did not show much evidence of learning by using, reverse engineering and R&D capabilities, which according to the extant literature are necessary for upgrading the existing TIC level, and thus the TCs of a firm, for better value-added processes and products. The research results are supported by the extant literature on the Sialkot clusters. In sum, a TIC assessment model was developed in this research which qualitatively identified the interviewed firms' TIC levels and the factors affecting them, and which is validated by the existing literature on the interviewed Sialkot clusters. Further, the research gives policy-level recommendations for TIC, and thus TCs, upgrading at firm and cluster level for targeting better value-added markets.

Relevance:

30.00%

Publisher:

Abstract:

The paper presents a 3-dimensional simulation of the effect of particle shape on char entrainment in a bubbling fluidised bed reactor. Three char particles of 350 μm size but of different shapes (cube, sphere, and tetrahedron) are injected into the fluidised bed, and the momentum transport from the fluidising gas and fluidised sand is modelled. Depending on the fluidising conditions, reactor design and particle shape, the char particles will either be entrained from the reactor or remain inside the bubbling bed. The sphericity of the particles is the factor that differentiates their motion inside the reactor and their entrainment out of it. The simulation has been performed with a completely revised momentum transport model for bubble three-phase flow, taking the sphericity factors into account, and has been applied as an extension to the commercial finite volume code FLUENT 6.3. © 2010 Elsevier B.V. All rights reserved.
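Sphericity, the shape factor the simulation hinges on, can be checked directly from the standard definition ψ = π^(1/3)(6V)^(2/3)/A (surface area of the equal-volume sphere over the particle's actual surface area). The short calculation below, for unit-size shapes since ψ is scale-free, gives the values the three particle shapes would carry; it is a worked check, not output from the paper.

```python
import numpy as np

def sphericity(volume, area):
    """psi = (surface area of the equal-volume sphere) / (actual surface area)."""
    return np.pi ** (1 / 3) * (6 * volume) ** (2 / 3) / area

a = 1.0  # characteristic length; psi does not depend on it
shapes = {
    "sphere (diameter a)":  (np.pi / 6 * a**3,        np.pi * a**2),
    "cube (side a)":        (a**3,                     6 * a**2),
    "tetrahedron (edge a)": (a**3 / (6 * np.sqrt(2)),  np.sqrt(3) * a**2),
}
for name, (V, A) in shapes.items():
    print(f"{name:22s} psi = {sphericity(V, A):.3f}")
# sphere -> 1.000, cube -> 0.806, regular tetrahedron -> 0.671
```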

Relevance:

30.00%

Publisher:

Abstract:

Photometric Stereo is a powerful image-based 3D reconstruction technique that has recently been used to obtain very high quality reconstructions. However, in its classic form, Photometric Stereo suffers from two main limitations. Firstly, one needs to obtain images of the 3D scene under multiple different illuminations; as a result the 3D scene needs to remain static during illumination changes, which prohibits the reconstruction of deforming objects. Secondly, the images obtained must be from a single viewpoint, which leads to depth-map based 2.5D reconstructions instead of full 3D surfaces. The aim of this Chapter is to show how these limitations can be alleviated, leading to the derivation of two practical 3D acquisition systems: the first, based on the powerful Coloured Light Photometric Stereo method, can be used to reconstruct moving objects such as cloth or human faces; the second permits the complete 3D reconstruction of challenging objects such as porcelain vases. In addition to algorithmic details, the Chapter pays attention to practical issues such as setup calibration and the detection and correction of self and cast shadows. We provide several evaluation experiments as well as reconstruction results. © 2010 Springer-Verlag Berlin Heidelberg.
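The per-pixel relation behind the Coloured Light Photometric Stereo method mentioned above: with three differently coloured directional lights captured in a single RGB frame, the measured colour c relates to the surface normal n through one 3x3 matrix M that folds together light directions, light colours and camera response, so n is proportional to the inverse of M applied to c. A minimal sketch, with M and the observation invented for illustration:

```python
import numpy as np

# Coloured-light photometric stereo: three coloured directional lights, one RGB frame.
# Per pixel the observed colour is c = M @ n, with M folding light directions,
# light colours and camera response; inverting the 3x3 system per pixel gives n.
L = np.array([[0.0, 0.0, 1.0],        # light directions, one per colour channel
              [0.8, 0.0, 0.6],
              [0.0, 0.8, 0.6]])
E = np.diag([1.0, 0.9, 1.1])          # per-channel gains (assumed known from calibration)
M = E @ L

true_n = np.array([0.3, -0.2, 0.93]); true_n /= np.linalg.norm(true_n)
c = M @ true_n                         # synthetic RGB observation (Lambertian, no shadows)

n = np.linalg.solve(M, c)              # per-pixel inversion
n /= np.linalg.norm(n)                 # albedo scale is lost; renormalise
print("recovered normal:", n)
```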