862 results for Image-based control


Relevance: 100.00%

Abstract:

Statistical models have recently been introduced in computational orthopaedics to investigate bone mechanical properties across populations. A fundamental aspect of the construction of statistical models is the establishment of accurate anatomical correspondences among the objects of the training dataset. Various methods, such as mesh morphing and image registration algorithms, have been proposed to solve this problem. The objective of this study is to compare a mesh-based and an image-based statistical appearance model approach for the creation of finite element (FE) meshes. A computed tomography (CT) dataset of 157 human left femurs was used for the comparison. For each approach, 30 finite element meshes were generated with the models. The quality of the obtained FE meshes was evaluated in terms of volume, size and shape of the elements. Results showed that the quality of the meshes obtained with the image-based approach was higher than that obtained with the mesh-based approach. Future studies are required to evaluate the impact of this finding on the final mechanical simulations.
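
The abstract does not specify which element-quality metrics were used; as a hedged illustration, the sketch below computes one common normalized shape-quality measure for tetrahedral elements (equal to 1 for a regular tetrahedron, approaching 0 for slivers), which is one way "shape of the elements" is often quantified. Function names and the poor-quality cutoff are assumptions, not taken from the study.

```python
import numpy as np

def tet_quality(p0, p1, p2, p3):
    """Normalized shape quality of a tetrahedron.

    Returns 6*sqrt(2)*V / l_rms^3, which equals 1 for a regular
    tetrahedron and tends to 0 for degenerate (sliver) elements.
    """
    p0, p1, p2, p3 = map(np.asarray, (p0, p1, p2, p3))
    volume = abs(np.dot(p1 - p0, np.cross(p2 - p0, p3 - p0))) / 6.0
    edges = [p1 - p0, p2 - p0, p3 - p0, p2 - p1, p3 - p1, p3 - p2]
    l_rms = np.sqrt(np.mean([np.dot(e, e) for e in edges]))
    return 6.0 * np.sqrt(2.0) * volume / l_rms ** 3

def mesh_quality_summary(nodes, tets):
    """Summarize element quality over a tetrahedral FE mesh.

    nodes: (N, 3) array of coordinates; tets: (M, 4) array of node indices.
    The 0.2 cutoff for "poor" elements is illustrative only.
    """
    q = np.array([tet_quality(*nodes[t]) for t in tets])
    return {"min": q.min(), "mean": q.mean(), "fraction_poor": float(np.mean(q < 0.2))}

# Example: a single regular tetrahedron with unit edges has quality 1.0.
reg = np.array([[0, 0, 0], [1, 0, 0], [0.5, np.sqrt(3) / 2, 0],
                [0.5, np.sqrt(3) / 6, np.sqrt(2.0 / 3.0)]])
print(tet_quality(*reg))
```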

Relevance: 100.00%

Abstract:

Decision trees have been proposed as a basis for modifying table-based injection to reduce transient particulate spikes during the turbocharger lag period. It has been shown that decision trees can detect particulate spikes in real time. In well-calibrated, electronically controlled diesel engines these spikes are narrow and are encompassed by a wider NOx spike. Decision trees have been shown to pinpoint the exact location of measured opacity spikes in real time, thus enabling targeted PM reduction with a near-zero NOx penalty. A calibrated dimensional model has been used to demonstrate the possible reduction of particulate matter with targeted injection pressure pulses. A post-injection strategy optimized for near-stoichiometric combustion has been shown to provide additional benefits. Empirical models have been used to calculate emission tradeoffs over the entire FTP cycle. An empirical model-based transient calibration has been used to demonstrate that such targeted transient modifiers are more beneficial at lower engine-out NOx levels.
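
The abstract does not give the feature set or tree structure used in the study; purely as an illustration of the idea, the sketch below trains a decision tree on a few hypothetical engine signals (fueling rate, boost pressure, engine speed) to flag time steps belonging to an opacity spike, so that a targeted injection modifier could be applied only there. Features, data and threshold are invented for the example.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Hypothetical training data: one row per time step of a transient cycle.
# Columns: fueling rate [mg/stroke], boost pressure [bar], engine speed [rpm].
X = np.array([
    [40.0, 1.1, 1500], [95.0, 1.2, 1800], [120.0, 1.3, 2100],
    [60.0, 2.0, 2200], [110.0, 1.4, 2000], [50.0, 1.9, 1700],
])
# Label: 1 if the measured opacity at that time step exceeded a spike threshold.
y = np.array([0, 1, 1, 0, 1, 0])

clf = DecisionTreeClassifier(max_depth=3).fit(X, y)

def spike_expected(fueling, boost, speed):
    """Return True if the tree predicts a particulate spike at this operating point."""
    return bool(clf.predict([[fueling, boost, speed]])[0])

# During turbo lag, high fueling at low boost is the kind of point flagged as a spike.
print(spike_expected(115.0, 1.25, 1950))
```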

Relevance: 100.00%

Abstract:

This investigation uses simulation to explore the inherent tradeoffs of controlling high-speed, highly robust walking robots while minimizing energy consumption. Using a novel controller that optimizes robustness, energy economy, and speed of a simulated robot on rough terrain, the user can adjust the priorities among these three outcome measures and systematically generate a performance curve assessing the tradeoffs associated with these metrics.
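
The controller and simulator are not described in enough detail here to reproduce; the sketch below only illustrates the reported workflow of sweeping the user's priority weights over robustness, energy economy, and speed and collecting the outcomes into a tradeoff (performance) curve. `evaluate_controller` is a hypothetical stand-in for running the robot simulation.

```python
import numpy as np

def evaluate_controller(w_robust, w_energy, w_speed):
    """Hypothetical stand-in for simulating the robot with a controller tuned
    for the given priority weights; returns (robustness, energy_cost, speed)."""
    # Toy model in which emphasizing one objective degrades the others.
    total = w_robust + w_energy + w_speed
    return (w_robust / total, 1.0 - w_energy / total, w_speed / total)

# Sweep the priority between robustness and speed at a fixed energy weight,
# producing one point of the performance curve per weighting.
curve = []
for alpha in np.linspace(0.0, 1.0, 11):
    outcome = evaluate_controller(w_robust=alpha, w_energy=0.5, w_speed=1.0 - alpha)
    curve.append((alpha, outcome))

for alpha, (robustness, energy_cost, speed) in curve:
    print(f"alpha={alpha:.1f}  robustness={robustness:.2f}  "
          f"energy_cost={energy_cost:.2f}  speed={speed:.2f}")
```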

Relevance: 100.00%

Abstract:

BACKGROUND: Short-acting agents for neuromuscular block (NMB) require frequent dosing adjustments to individual patients' needs. In this study, we verified a new closed-loop controller for mivacurium dosing in clinical trials. METHODS: Fifteen patients were studied. T1% measured with electromyography was used as the input signal for the model-based controller. After induction of propofol/opiate anaesthesia, stabilization of the baseline electromyography signal was awaited and a bolus of 0.3 mg kg-1 mivacurium was then administered to facilitate endotracheal intubation. Closed-loop infusion was started thereafter, targeting a neuromuscular block of 90%. Setpoint deviation, the number of manual interventions and surgeons' complaints were recorded. Drug use and its variability between and within patients were evaluated. RESULTS: Median time of closed-loop control for the 11 patients included in the data processing was 135 [89-336] min (median [range]). Four patients had to be excluded because of sensor problems. Mean absolute deviation from the setpoint was 1.8 +/- 0.9 T1%. Neither manual interventions nor complaints from the surgeons were recorded. The mean necessary mivacurium infusion rate was 7.0 +/- 2.2 microg kg-1 min-1. Intrapatient variability of mean infusion rates over 30-min intervals showed differences of up to a factor of 1.8 between the highest and lowest requirement in the same patient. CONCLUSIONS: Neuromuscular block can be precisely controlled with mivacurium using our model-based controller. The amount of mivacurium needed to maintain T1% at defined constant levels differed largely between and within patients. Closed-loop control therefore seems advantageous to automatically maintain neuromuscular block at constant levels.
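
The authors' controller is model-based and its internals are not given in the abstract; purely as an illustration of the closed-loop idea, the sketch below uses a simple proportional-integral law that adjusts the mivacurium infusion rate from the deviation of the measured T1% from its setpoint (a 90% block corresponds to a T1% target of 10%). All gains and limits are invented for the example and are not the study's values.

```python
class InfusionController:
    """Toy PI controller for a drug infusion loop (illustrative only;
    not the model-based controller of the study)."""

    def __init__(self, target_t1=10.0, kp=0.5, ki=0.05,
                 max_rate=20.0):  # rate in microg/kg/min (assumed ceiling)
        self.target_t1 = target_t1
        self.kp, self.ki = kp, ki
        self.max_rate = max_rate
        self.integral = 0.0

    def update(self, measured_t1, dt_min=1.0):
        """Return the new infusion rate from the latest T1% measurement."""
        error = measured_t1 - self.target_t1   # positive error => block too shallow
        self.integral += error * dt_min
        rate = self.kp * error + self.ki * self.integral
        return min(max(rate, 0.0), self.max_rate)   # clamp to safe bounds

controller = InfusionController()
for t1 in [35.0, 22.0, 14.0, 11.0, 9.5]:    # simulated EMG readings after the bolus
    print(f"T1%={t1:5.1f} -> infusion rate {controller.update(t1):5.2f} microg/kg/min")
```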

Relevance: 100.00%

Abstract:

Invasive plant species threaten natural areas by reducing biodiversity and altering ecosystem functions. They also impact agriculture by reducing crop and livestock productivity. Millions of dollars are spent on invasive species control each year, and traditionally, herbicides are used to manage invasive species. Herbicides carry human and environmental health risks; therefore, it is essential that land managers and stakeholders attempt to reduce these risks by applying the principles of integrated weed management. Integrated weed management is a practice that incorporates a variety of measures and focuses on the ecology of the invasive plant to manage it. Roadways are high-risk areas with a high incidence of invasive species. Roadways act as conduits for invasive species spread and are ideal harborages for population growth; therefore, roadways should be a primary target for invasive species control. There are four stages in the invasion process that an invasive species must pass through: transport, establishment, spread, and impact. The aim of this dissertation was to focus on these four stages and examine the mechanisms underlying the progression from one stage to the next, while also developing integrated weed management strategies. The target species were Phragmites australis, common reed, and Cirsium arvense, Canada thistle. The transport and establishment risks of P. australis can be reduced by removing rhizome fragments from soil when roadside maintenance is performed. The establishment and spread of C. arvense can be reduced by planting particular resistant species, e.g. Heterotheca villosa, especially those that can reduce light transmittance to the soil. Finally, the spread and impact of C. arvense can be mitigated on roadsides through the use of the herbicide aminopyralid. The risks associated with herbicide drift produced by application equipment can be reduced by using the Wet-Blade herbicide application system.

Relevance: 100.00%

Abstract:

Image-based Relighting (IBRL) has recently attracted a lot of research interest for its ability to relight real objects or scenes under novel illuminations captured in natural or synthetic environments. Complex lighting effects such as subsurface scattering, interreflection, shadowing, mesostructural self-occlusion, refraction and other relevant phenomena can be generated using IBRL. The main advantage of image-based graphics is that the rendering time is independent of scene complexity, since rendering is actually a process of manipulating image pixels instead of simulating light transport. The goal of this paper is to provide a complete and systematic overview of the research in Image-based Relighting. We observe that essentially all IBRL techniques can be broadly classified into three categories (Fig. 9), based on how the scene/illumination information is captured: reflectance function-based, basis function-based and plenoptic function-based. We discuss the characteristics of each of these categories and their representative methods. We also discuss the sampling density and the types of light source(s), which are relevant issues in IBRL.
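
Because light transport is linear, the basis function-based family of IBRL techniques can relight a scene as a weighted sum of images captured under basis illuminations. The sketch below shows that core operation only; the image data and weights are placeholders, not from the survey.

```python
import numpy as np

def relight(basis_images, weights):
    """Relight a scene from pre-captured basis images.

    basis_images: list of float arrays (H, W, 3), one per basis light.
    weights: intensity of each basis light in the novel illumination.
    Light transport is linear, so the relit image is a weighted sum.
    """
    out = np.zeros_like(basis_images[0], dtype=np.float64)
    for image, w in zip(basis_images, weights):
        out += w * image
    return np.clip(out, 0.0, 1.0)

# Placeholder data: three basis images of a tiny 4x4 scene lit from three directions.
rng = np.random.default_rng(0)
basis = [rng.random((4, 4, 3)) for _ in range(3)]
novel = relight(basis, weights=[0.8, 0.1, 0.4])
print(novel.shape)
```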

Relevance: 100.00%

Abstract:

In attempts to elucidate the underlying mechanisms of spinal injuries and spinal deformities, several experimental and numerical studies have been conducted to understand the biomechanical behavior of the spine. However, numerical biomechanical studies suffer from uncertainties associated with hard- and soft-tissue anatomies. Currently, these parameters are identified manually on each mesh model prior to simulations. The determination of soft connective tissues on finite element meshes can be a tedious procedure, which limits the number of models used in numerical studies to a few instances. In order to address these limitations, an image-based method for automatic morphing of soft connective tissues has been proposed. Results showed that the proposed method is capable of accurately determining the spatial locations of predetermined bony landmarks. The present method can be used to automatically generate patient-specific models, which may be helpful in designing studies involving a large number of instances and in understanding the mechanical behavior of biomechanical structures across a given population.
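
The abstract gives no implementation details; as a hedged sketch of the image-based morphing idea, the code below propagates template landmark voxel coordinates to a subject image through a dense displacement field such as one produced by non-rigid image registration. The registration itself is assumed to have been computed elsewhere, and the field here is a placeholder.

```python
import numpy as np

def map_landmarks(landmarks_vox, displacement_field):
    """Map template landmarks into subject space via a dense displacement field.

    landmarks_vox: (N, 3) integer voxel coordinates in the template image.
    displacement_field: (X, Y, Z, 3) array giving, for each template voxel,
    the displacement (in voxels) to the corresponding subject location.
    """
    landmarks_vox = np.asarray(landmarks_vox, dtype=int)
    mapped = []
    for i, j, k in landmarks_vox:
        disp = displacement_field[i, j, k]            # displacement at the landmark
        mapped.append(np.array([i, j, k], dtype=float) + disp)
    return np.array(mapped)

# Placeholder example: a 10x10x10 field with a uniform shift of 1.5 voxels in x.
field = np.zeros((10, 10, 10, 3))
field[..., 0] = 1.5
print(map_landmarks([[2, 3, 4], [5, 5, 5]], field))
```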

Relevance: 100.00%

Abstract:

Statistical appearance models have recently been introduced in bone mechanics to investigate bone geometry and mechanical properties in population studies. The establishment of accurate anatomical correspondences is a critical aspect for the construction of reliable models. Depending on the representation of a bone as an image or a mesh, correspondences are detected using image registration or mesh morphing. The objective of this study was to compare image-based and mesh-based statistical appearance models of the femur for finite element (FE) simulations. To this end, (i) we compared correspondence detection methods on the bone surface and in the bone volume; (ii) we created an image-based and a mesh-based statistical appearance model from 130 images, which we validated using compactness, representation and generalization, and we analyzed the FE results on 50 recreated bones vs. the original bones; (iii) we created 1000 new instances and compared the quality of the FE meshes. Results showed that the image-based approach was more accurate in volume correspondence detection and in the quality of the FE meshes, whereas the mesh-based approach was more accurate for surface correspondence detection and model compactness. Based on our results, we recommend the use of image-based statistical appearance models for FE simulations of the femur.
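
The abstract names compactness as one validation metric for the statistical appearance models. As an illustration under the standard PCA formulation (not necessarily the exact pipeline of the study), the sketch below builds a linear statistical model from vectorized training samples and reports compactness as the cumulative fraction of variance captured by the first modes. The training data here are random placeholders.

```python
import numpy as np

def build_statistical_model(samples):
    """PCA-based statistical model from vectorized training samples.

    samples: (n_samples, n_features) array, e.g. stacked image intensities
    or mesh node coordinates with established correspondences.
    Returns (mean, modes, variances) with modes sorted by decreasing variance.
    """
    mean = samples.mean(axis=0)
    centered = samples - mean
    # SVD of the centered data gives the principal modes of variation.
    _, s, vt = np.linalg.svd(centered, full_matrices=False)
    variances = (s ** 2) / (samples.shape[0] - 1)
    return mean, vt, variances

def compactness(variances, n_modes):
    """Fraction of total variance explained by the first n_modes modes."""
    return variances[:n_modes].sum() / variances.sum()

rng = np.random.default_rng(1)
train = rng.normal(size=(130, 200))   # placeholder for 130 vectorized femurs
mean, modes, var = build_statistical_model(train)
print(f"compactness with 10 modes: {compactness(var, 10):.3f}")
```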

Relevance: 100.00%

Abstract:

Life expectancy continuously increases, but our society faces age-related conditions. Among musculoskeletal diseases, osteoporosis, associated with the risk of vertebral fracture, and intervertebral disc (IVD) degeneration are painful pathologies responsible for tremendous healthcare costs. Hence, reliable diagnostic tools are necessary to plan a treatment or follow up on its efficacy. Yet radiographic and MRI techniques, respectively the clinical standards for evaluation of bone strength and IVD degeneration, are unspecific and not objective. Increasingly used in biomedical engineering, CT-based finite element (FE) models constitute the state of the art for vertebral strength prediction. However, as non-invasive biomechanical evaluation and personalised FE models of the IVD are not available, rigid boundary conditions (BCs) are applied to the FE models to avoid uncertainties of disc degeneration that might bias the predictions. Moreover, considering the impact of low back pain, the biomechanical status of the IVD is needed as a criterion for early disc degeneration. Thus, the first FE study focuses on two rigid BCs applied to the vertebral bodies during compression tests of cadaver vertebral bodies: vertebral sections and PMMA embedding. The second FE study highlights the large influence of the intervertebral disc's compliance on the vertebral strength, the damage distribution and its initiation. The third study introduces a new protocol for normalisation of the IVD stiffness in compression, torsion and bending using MRI-based data to account for its morphology. In the last study, a new criterion (Otsu threshold) for disc degeneration based on quantitative MRI data (axial T2 map) is proposed. The results show that vertebral strength and damage distribution computed with the two rigid BCs are identical. Yet, large discrepancies in strength and damage localisation were observed when the vertebral bodies were loaded via IVDs. The normalisation protocol attenuated the effect of geometry on the IVD stiffnesses without completely suppressing it. Finally, the Otsu threshold computed in the posterior part of the annulus fibrosus was related to the disc biomechanics and meets the objectivity and simplicity required for a clinical application. In conclusion, the stiffness normalisation protocol necessary for consistent IVD comparisons and the relation found between degeneration, the mechanical response of the IVD and the Otsu threshold lead the way towards non-invasive evaluation of the biomechanical status of the IVD. As the FE prediction of vertebral strength is largely influenced by the IVD conditions, these data could also improve future FE models of the osteoporotic vertebra.
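
The last study uses the Otsu threshold of an axial T2 map, evaluated in the posterior annulus fibrosus, as a degeneration criterion. The sketch below shows how such a threshold can be computed with scikit-image from the T2 values inside a region-of-interest mask; the T2 data and the mask are placeholders, not the study's data.

```python
import numpy as np
from skimage.filters import threshold_otsu

def otsu_in_roi(t2_map, roi_mask):
    """Otsu threshold of the T2 values inside a region of interest.

    t2_map: 2-D array of T2 relaxation times (ms) from an axial slice.
    roi_mask: boolean array of the same shape, True in the posterior annulus.
    """
    values = t2_map[roi_mask]
    return threshold_otsu(values)

# Placeholder T2 slice: a mix of low-T2 (degenerated-like) and high-T2 tissue.
rng = np.random.default_rng(2)
t2 = np.concatenate([rng.normal(40, 5, 500), rng.normal(90, 10, 500)]).reshape(25, 40)
mask = np.zeros_like(t2, dtype=bool)
mask[10:20, 5:35] = True                   # stand-in for the posterior annulus
print(f"Otsu threshold: {otsu_in_roi(t2, mask):.1f} ms")
```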

Relevance: 100.00%

Abstract:

Several types of parallelism can be exploited in logic programs while preserving correctness and efficiency, i.e. ensuring that the parallel execution obtains the same results as the sequential one and the amount of work performed is not greater. However, such results do not take into account a number of overheads which appear in practice, such as process creation and scheduling, which can induce a slow-down, or, at least, limit speedup, if they are not controlled in some way. This paper describes a methodology whereby the granularity of parallel tasks, i.e. the work available under them, is efficiently estimated and used to limit parallelism so that the effect of such overheads is controlled. The run-time overhead associated with the approach is usually quite small, since as much work is done at compile time as possible. Also, a number of run-time optimizations are proposed. Moreover, a static analysis of the overhead associated with the granularity control process is performed in order to decide its convenience. The performance improvements resulting from the incorporation of grain size control are shown to be quite good, especially for systems with medium to large parallel execution overheads.
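
The paper targets parallel logic programs, but the core scheme (estimate the work under a task largely at compile time, then at run time spawn a parallel task only when that estimate exceeds the parallelisation overhead) can be sketched in Python. The cost function and the threshold below are invented for the example and are not the paper's analysis.

```python
from concurrent.futures import ThreadPoolExecutor

GRANULARITY_THRESHOLD = 50_000    # assumed overhead-derived cutoff (illustrative)

def estimated_cost(task_input):
    """Stand-in for a compile-time-derived cost function of the task's work."""
    return len(task_input) ** 2

def run_task(task_input):
    return sum(hash(x) % 97 for x in task_input)    # placeholder workload

def run_all(tasks):
    """Run each task in parallel only if its estimated granularity pays for
    the overhead of creating and scheduling a parallel task."""
    results = []
    with ThreadPoolExecutor() as pool:
        futures = []
        for task_input in tasks:
            if estimated_cost(task_input) > GRANULARITY_THRESHOLD:
                futures.append(pool.submit(run_task, task_input))    # worth spawning
            else:
                results.append(run_task(task_input))                 # run sequentially
        results.extend(f.result() for f in futures)
    return results

print(len(run_all([list(range(n)) for n in (10, 500, 20, 800)])))
```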

Relevance: 100.00%

Abstract:

Several types of parallelism can be exploited in logic programs while preserving correctness and efficiency, i.e. ensuring that the parallel execution obtains the same results as the sequential one and the amount of work performed is not greater. However, such results do not take into account a number of overheads which appear in practice, such as process creation and scheduling, which can induce a slow-down, or, at least, limit speedup, if they are not controlled in some way. This paper describes a methodology whereby the granularity of parallel tasks, i.e. the work available under them, is efficiently estimated and used to limit parallelism so that the effect of such overheads is controlled. The run-time overhead associated with the approach is usually quite small, since as much work is done at compile time as possible. Also, a number of run-time optimizations are proposed. Moreover, a static analysis of the overhead associated with the granularity control process is performed in order to decide its convenience. The performance improvements resulting from the incorporation of grain size control are shown to be quite good, especially for systems with medium to large parallel execution overheads.

Relevance: 100.00%

Abstract:

These slides present several 3-D reconstruction methods for obtaining the geometric structure of a scene that is viewed by multiple cameras. We focus on combining geometric modeling of the image formation process with the use of standard optimization tools to estimate the characteristic parameters that describe the geometry of the 3-D scene. In particular, linear, non-linear and robust methods to estimate the monocular and epipolar geometry are introduced as cornerstones to generate 3-D reconstructions with multiple cameras. Some examples of systems that use this constructive strategy are Bundler, PhotoSynth, VideoSurfing, etc., which are able to obtain 3-D reconstructions with several hundred or even thousands of cameras.
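
As a small illustration of the robust epipolar-geometry step mentioned in the slides (not the full pipeline of systems such as Bundler or PhotoSynth), the sketch below estimates the fundamental matrix between two views from putative point correspondences using OpenCV's RANSAC-based estimator. The camera intrinsics and the synthetic correspondences are placeholders; in practice they come from calibration and feature matching.

```python
import numpy as np
import cv2

def estimate_epipolar_geometry(pts1, pts2):
    """Robustly estimate the fundamental matrix from matched image points.

    pts1, pts2: (N, 2) arrays of corresponding pixel coordinates in the
    two views (N >= 8). Returns (F, inlier_mask).
    """
    pts1 = np.asarray(pts1, dtype=np.float32)
    pts2 = np.asarray(pts2, dtype=np.float32)
    F, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.99)
    return F, mask.ravel().astype(bool)

# Placeholder data: synthetic 3-D points seen by two cameras related by a
# sideways translation of the second camera.
rng = np.random.default_rng(3)
K = np.array([[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]])
X = rng.uniform([-1, -1, 4], [1, 1, 8], size=(30, 3))        # points in front of the cameras
proj1 = (K @ X.T).T
proj2 = (K @ (X - np.array([0.2, 0.0, 0.0])).T).T            # second camera shifted in x
pts1 = proj1[:, :2] / proj1[:, 2:]
pts2 = proj2[:, :2] / proj2[:, 2:]
F, inliers = estimate_epipolar_geometry(pts1, pts2)
print(F.shape, int(inliers.sum()))
```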

Relevance: 100.00%

Abstract:

Measurement of concrete strain through non-invasive methods is of great importance in civil engineering and structural analysis. Traditional methods use laser speckle and high-quality cameras that may be too expensive for many applications. Here we present a method for measuring concrete deformations with a standard reflex camera and image processing for tracking objects on the concrete's surface. Two different approaches are presented. In the first one, on-purpose objects are drawn on the surface, while in the second one we track small defects on the surface due to air bubbles formed during the hardening process. The method has been tested on a concrete sample under several loading/unloading cycles. A stop-motion sequence of the process has been captured and analyzed. Results have been successfully compared with the values given by a strain gauge. The accuracy of our methods in tracking objects is below 8 μm, on the order of more expensive commercial devices.
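
The abstract describes tracking surface marks or defects between frames of a stop-motion sequence and converting their displacement into strain. As a hedged sketch of one way to do this (not necessarily the authors' implementation), the code below locates a small template around a surface mark in each frame with OpenCV's normalized cross-correlation and computes the engineering strain from the change in distance between two tracked marks. Sub-pixel refinement and the pixel-to-millimetre calibration that the reported ~8 μm accuracy would require are omitted; the file names and patch coordinates are placeholders.

```python
import numpy as np
import cv2

def track_mark(frame, template):
    """Return the (x, y) pixel position of the best match of `template` in `frame`."""
    result = cv2.matchTemplate(frame, template, cv2.TM_CCOEFF_NORMED)
    _, _, _, max_loc = cv2.minMaxLoc(result)
    return np.array(max_loc, dtype=float)

def engineering_strain(p_a0, p_b0, p_a, p_b):
    """Strain from the change in distance between two tracked marks.

    p_a0, p_b0: mark positions in the reference (unloaded) frame.
    p_a, p_b: mark positions in the current frame.
    """
    l0 = np.linalg.norm(p_b0 - p_a0)
    l = np.linalg.norm(p_b - p_a)
    return (l - l0) / l0

# Usage sketch (file names and patch locations are placeholders):
# ref = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)
# cur = cv2.imread("frame_050.png", cv2.IMREAD_GRAYSCALE)
# tmpl_a = ref[100:130, 200:230]           # patch around the first mark
# tmpl_b = ref[400:430, 200:230]           # patch around the second mark
# strain = engineering_strain(track_mark(ref, tmpl_a), track_mark(ref, tmpl_b),
#                             track_mark(cur, tmpl_a), track_mark(cur, tmpl_b))
```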