998 results for dynamic geometric calibration
Abstract:
This study extends previous research on intervertebral motion registration by means of 2D dynamic fluoroscopy to obtain a more comprehensive 3D description of vertebral kinematics. The problem of estimating the 3D rigid pose of a CT volume of a vertebra from its 2D X-ray fluoroscopy projection is addressed. 2D-3D registration is obtained by maximising a measure of similarity between Digitally Reconstructed Radiographs (obtained from the CT volume) and the real fluoroscopic projection. X-ray energy correction was performed. To assess the method, a calibration model was realised: a dry sheep vertebra was rigidly fixed to a frame of reference including metallic markers. An accurate measurement of 3D orientation was obtained via single-camera calibration of the markers and held as the true 3D vertebra position; the vertebra's 3D pose was then estimated and the results compared. Error analysis revealed accuracy of the order of 0.1 degree for the rotation angles, of about 1 mm for displacements parallel to the fluoroscopic plane, and of the order of 10 mm for the orthogonal displacement. © 2010 P. Bifulco et al.
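As an illustration of the registration principle described above, the sketch below recovers a pose by maximizing normalized cross-correlation between a toy parallel-projection DRR and a target projection. It is a minimal sketch only: the study estimates a full 3D rigid pose against real fluoroscopy with X-ray energy correction, whereas here only two rotation angles of a synthetic volume are recovered, and `drr`, `ncc`, and `register` are hypothetical helper names.

```python
# Minimal 2D-3D registration sketch: recover rotation angles by maximizing
# similarity between a toy DRR and a target projection. Assumptions: parallel
# projection, two rotational degrees of freedom, synthetic Gaussian-blob volume.
import numpy as np
from scipy.ndimage import rotate
from scipy.optimize import minimize

def drr(volume, angles):
    """Toy DRR: rotate the volume, then integrate along one axis."""
    v = rotate(volume, angles[0], axes=(1, 2), reshape=False, order=1)
    v = rotate(v, angles[1], axes=(0, 2), reshape=False, order=1)
    return v.sum(axis=0)

def ncc(a, b):
    """Normalized cross-correlation, the similarity measure being maximized."""
    a = (a - a.mean()) / (a.std() + 1e-12)
    b = (b - b.mean()) / (b.std() + 1e-12)
    return (a * b).mean()

def register(volume, target, x0=(0.0, 0.0)):
    """Search the pose (here, two angles) that best explains the projection."""
    return minimize(lambda x: -ncc(drr(volume, x), target),
                    x0, method="Nelder-Mead").x

# Synthetic check: an off-center Gaussian blob projected at a known pose.
z, y, x = np.mgrid[0:32, 0:32, 0:32]
vol = np.exp(-((x - 20)**2 + (y - 16)**2 + (z - 12)**2) / 40.0)
print(register(vol, drr(vol, (5.0, -3.0))))  # expect roughly [5, -3]
```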
Abstract:
Current reform initiatives recommend that geometry instruction include the study of three-dimensional geometric objects and provide students with opportunities to use spatial skills in problem-solving tasks. Geometer's Sketchpad (GSP) is a dynamic and interactive computer program that enables the user to investigate and explore geometric concepts and manipulate geometric structures. Research using GSP as an instructional tool has focused primarily on teaching and learning two-dimensional geometry. This study explored the effect of a GSP-based instructional environment on students' geometric thinking and three-dimensional spatial ability as they used GSP to learn three-dimensional geometry. For 10 weeks, 18 tenth-grade students from an urban school district used GSP to construct and analyze dynamic, two-dimensional representations of three-dimensional objects in a classroom environment that encouraged exploration, discussion, conjecture, and verification. The data were collected primarily from participant observations and clinical interviews and analyzed using qualitative methods of analysis. In addition, pretest and posttest measures of three-dimensional spatial ability and van Hiele level of geometric thinking were obtained. Spatial ability measures were analyzed using standard t-test analysis. The data from this study indicate that GSP is a viable tool to teach students about three-dimensional geometric objects. A comparison of students' pretest and posttest van Hiele levels showed an improvement in geometric thinking, especially for students on lower levels of the van Hiele theory. Evidence at the p < .05 level indicated that students' spatial ability improved significantly. Specifically, the dynamic, visual GSP environment supported students' visualization and reasoning processes as they attempted to solve challenging tasks about three-dimensional geometric objects. The GSP instructional activities also provided students with an experiential base and an intuitive understanding of three-dimensional objects from which more formal work in geometry could be pursued. This study demonstrates that by designing appropriate GSP-based instructional environments, it is possible to help students improve their spatial skills, develop more coherent and accurate intuitions about three-dimensional geometric objects, and progress through the levels of geometric thinking proposed by van Hiele.
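As a sketch of the pretest/posttest comparison reported above (the study's raw scores are not reproduced here, so the numbers are hypothetical), a paired t-test looks like:

```python
# Paired t-test on hypothetical pretest/posttest spatial-ability scores.
from scipy import stats

pre = [12, 15, 9, 14, 11, 16, 10, 13]    # hypothetical pretest scores
post = [15, 18, 12, 16, 13, 19, 12, 15]  # hypothetical posttest scores
t, p = stats.ttest_rel(post, pre)
print(f"t = {t:.2f}, p = {p:.4f}")  # significance assessed at p < .05
```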
Abstract:
Managed lane strategies are innovative road operation schemes for addressing congestion problems. These strategies operate one or more lanes adjacent to a freeway that provide congestion-free trips to eligible users, such as transit vehicles or toll-payers. To ensure the successful implementation of managed lanes, the demand on these lanes needs to be accurately estimated. Among the different approaches for predicting this demand, the four-step demand forecasting process is the most common, with managed lane demand usually estimated at the assignment step. Therefore, the key to reliably estimating the demand is the utilization of effective assignment modeling processes. Managed lanes are particularly effective when the road is functioning at near-capacity. Therefore, capturing variations in demand and in network attributes and performance is crucial for their modeling, monitoring, and operation. As a result, traditional modeling approaches, such as the static traffic assignment used in demand forecasting models, fail to correctly predict managed lane demand and the associated system performance. The present study demonstrates the power of the more advanced modeling approach of dynamic traffic assignment (DTA), as well as the shortcomings of conventional approaches, when used to model managed lanes in congested environments. In addition, the study develops processes to support an effective utilization of DTA to model managed lane operations. Static and dynamic traffic assignments consist of demand, network, and route choice model components that need to be calibrated. These components interact with each other, and an iterative method for calibrating them is needed. In this study, an effective standalone framework that combines static demand estimation and dynamic traffic assignment has been developed to replicate real-world traffic conditions. With advances in traffic surveillance technologies, collecting, archiving, and analyzing traffic data is becoming more accessible and affordable. The present study shows how data from multiple sources can be integrated, validated, and best used in different stages of the modeling and calibration of managed lanes. Extensive and careful processing of demand, traffic, and toll data, as well as a proper definition of performance measures, result in a calibrated and stable model, which closely replicates real-world congestion patterns and can reasonably respond to perturbations in network and demand properties.
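The iterative calibration idea can be sketched as a loop that alternates a demand-adjustment step with a DTA run until simulated link counts match observations; the functions below are crude placeholders, not the study's actual demand-estimation tool or DTA simulator.

```python
# Skeleton of an iterative demand/DTA calibration loop.
import numpy as np

def run_dta(od):
    """Placeholder DTA: map an OD matrix to simulated link counts."""
    return 0.9 * od.sum(axis=1)

def calibrate(observed, seed_od, max_iter=20, tol=0.05):
    """Alternate demand adjustment and assignment until the relative
    gap between simulated and observed link counts is small."""
    od = seed_od.astype(float)
    for _ in range(max_iter):
        simulated = run_dta(od)                      # dynamic assignment step
        gap = np.abs(simulated - observed).mean() / observed.mean()
        if gap < tol:                                # model replicates counts
            break
        od *= observed.sum() / simulated.sum()       # demand-estimation step
    return od

print(calibrate(np.array([4.0, 5.0, 6.0]), np.ones((3, 3))))
```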
Abstract:
With the popularization of GPS-enabled devices such as mobile phones, location data are becoming available at an unprecedented scale. The locations may be collected from many different sources such as vehicles moving around a city, user check-ins in social networks, and geo-tagged micro-blogging photos or messages. Besides the longitude and latitude, each location record may also have a timestamp and additional information such as the name of the location. Time-ordered sequences of these locations form trajectories, which together contain useful high-level information about people's movement patterns.
The first part of this thesis focuses on a few geometric problems motivated by the matching and clustering of trajectories. We first give a new algorithm for computing a matching between a pair of curves under existing models such as dynamic time warping (DTW). The algorithm is more efficient than standard dynamic programming algorithms both theoretically and practically. We then propose a new matching model for trajectories that avoids the drawbacks of existing models. For trajectory clustering, we present an algorithm that computes clusters of subtrajectories, which correspond to common movement patterns. We also consider trajectories of check-ins, and propose a statistical generative model, which identifies check-in clusters as well as the transition patterns between the clusters.
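For reference, the standard O(nm) dynamic-programming computation of DTW that the thesis's algorithm improves upon can be sketched as follows (a textbook baseline, not the thesis's faster algorithm):

```python
# Textbook dynamic-programming DTW between two 2-D point sequences.
import numpy as np

def dtw(P, Q):
    """Return the DTW distance; D[i][j] is the best cost of matching
    the first i points of P with the first j points of Q."""
    n, m = len(P), len(Q)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(np.asarray(P[i - 1]) - np.asarray(Q[j - 1]))
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

print(dtw([(0, 0), (1, 0), (2, 0)], [(0, 0), (2, 0)]))
```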
The second part of the thesis considers the problem of covering shortest paths in a road network, motivated by an EV charging station placement problem. More specifically, a subset of vertices in the road network is selected for placing charging stations so that every shortest path contains enough charging stations and can be traveled by an EV without draining the battery. We first introduce a general technique for the geometric set cover problem. This technique leads to near-linear-time approximation algorithms, which are the state-of-the-art algorithms for this problem in either running time or approximation ratio. We then use this technique to develop a near-linear-time algorithm for this shortest-path cover problem.
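As a point of comparison for the covering technique mentioned above, the classic greedy set-cover heuristic (a baseline; the thesis develops a different, near-linear-time technique) is:

```python
# Classic greedy set cover: repeatedly pick the set covering the most
# uncovered elements; gives an O(log n)-approximate cover.
def greedy_set_cover(universe, sets):
    uncovered = set(universe)
    chosen = []
    while uncovered:
        best = max(sets, key=lambda s: len(uncovered & s))
        if not uncovered & best:
            raise ValueError("universe not coverable by the given sets")
        chosen.append(best)
        uncovered -= best
    return chosen

print(greedy_set_cover({1, 2, 3, 4}, [{1, 2}, {2, 3}, {3, 4}, {4}]))
```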
Abstract:
The dynamic interaction of vehicles and bridges induces live loads in bridges that are greater than the vehicle's static weight. To limit this dynamic effect, the Iowa Department of Transportation (DOT) currently requires that permitted trucks slow to five miles per hour and span the roadway centerline when crossing bridges. However, this practice has other negative consequences, such as the potential for crashes, impracticality for bridges with high traffic volumes, and higher fuel consumption. The main objective of this work was to provide information and guidance on the allowable speeds for permitted vehicles and loads on bridges. A field test program was implemented on five bridges (i.e., two steel girder bridges, two pre-stressed concrete girder bridges, and one concrete slab bridge) to investigate the dynamic response of bridges due to vehicle loadings. The important factors taken into account during the field tests included vehicle speed, entrance conditions, vehicle characteristics (i.e., empty dump truck, full dump truck, and semi-truck), and bridge geometric characteristics (i.e., long span and short span). Three entrance conditions were used: as-is, plus Level 1 and Level 2, which simulated rough entrance conditions with a fabricated ramp placed 10 feet from the joint between the bridge end and the approach slab and directly next to the joint, respectively. The researchers analyzed and utilized the field data to derive the dynamic impact factors (DIFs) for all gauges installed on each bridge under the different loading scenarios.
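A common definition of the DIF is the ratio of the peak dynamic response to the peak static response at a gauge; the report's exact formulation may differ, so the sketch below is illustrative only:

```python
# DIF = peak response under a moving truck / peak static (crawl-speed) response.
import numpy as np

def dynamic_impact_factor(dynamic_response, static_response):
    """Both arguments are gauge time histories (e.g., microstrain)."""
    return np.max(np.abs(dynamic_response)) / np.max(np.abs(static_response))

# Hypothetical gauge readings for one truck pass:
print(dynamic_impact_factor([105.0, 118.0, 96.0], [100.0, 98.0, 95.0]))  # ~1.18
```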
Abstract:
This dissertation presents the design of three high-performance successive-approximation-register (SAR) analog-to-digital converters (ADCs) using distinct digital background calibration techniques under the framework of a generalized code-domain linear equalizer. These digital calibration techniques effectively and efficiently remove the static mismatch errors in the analog-to-digital (A/D) conversion. They enable aggressive scaling of the capacitive digital-to-analog converter (DAC), which also serves as the sampling capacitor, down to the kT/C limit. As a result, outstanding conversion linearity, high signal-to-noise ratio (SNR), high conversion speed, robustness, superb energy efficiency, and minimal chip area are accomplished simultaneously. The first design is a 12-bit 22.5/45-MS/s SAR ADC in a 0.13-μm CMOS process. It employs a perturbation-based calibration, based on the superposition property of linear systems, to digitally correct the capacitor mismatch error in the weighted DAC. With 3.0-mW power dissipation at a 1.2-V power supply and a 22.5-MS/s sample rate, it achieves a 71.1-dB signal-to-noise-plus-distortion ratio (SNDR) and a 94.6-dB spurious-free dynamic range (SFDR). At the Nyquist frequency, the conversion figure of merit (FoM) is 50.8 fJ/conversion-step, the best FoM to date (2010) for 12-bit ADCs. The SAR ADC core occupies 0.06 mm², while the estimated area of the calibration circuits is 0.03 mm². The second proposed digital calibration technique is a bit-wise-correlation-based digital calibration. It utilizes the statistical independence of an injected pseudo-random signal and the input signal to correct the DAC mismatch in SAR ADCs. This idea is experimentally verified in a 12-bit 37-MS/s SAR ADC fabricated in 65-nm CMOS, implemented by Pingli Huang. This prototype chip achieves a 70.23-dB peak SNDR and an 81.02-dB peak SFDR, while occupying 0.12 mm² of silicon area and dissipating 9.14 mW from a 1.2-V supply with the synthesized digital calibration circuits included. The third work is an 8-bit, 600-MS/s, 10-way time-interleaved SAR ADC array fabricated in a 0.13-μm CMOS process. This work employs an adaptive digital equalization approach to calibrate both intra-channel nonlinearities and inter-channel mismatch errors. The prototype chip achieves 47.4-dB SNDR, 63.6-dB SFDR, less than 0.30-LSB differential nonlinearity (DNL), and less than 0.23-LSB integral nonlinearity (INL). The ADC array occupies an active area of 1.35 mm² and dissipates 30.3 mW, including the synthesized digital calibration circuits and an on-chip dual-loop delay-locked loop (DLL) for clock generation and synchronization.
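The conversion FoM quoted above is conventionally computed as FoM = P / (2^ENOB · f_s) with ENOB = (SNDR − 1.76)/6.02; plugging in the first prototype's numbers gives:

```python
# FoM = P / (2**ENOB * fs), with ENOB = (SNDR - 1.76) / 6.02.
P, fs, sndr = 3.0e-3, 22.5e6, 71.1   # W, S/s, dB (low-frequency SNDR)
enob = (sndr - 1.76) / 6.02
fom = P / (2**enob * fs)
print(f"ENOB = {enob:.2f} bits, FoM = {fom * 1e15:.1f} fJ/conversion-step")
# ~45 fJ/step with the low-frequency SNDR; the quoted 50.8 fJ/step uses the
# slightly lower SNDR measured at a Nyquist input frequency.
```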
Abstract:
The goal of this project is to learn the steps necessary to create a finite element model that can accurately predict the dynamic response of a Kohler Engines Heavy Duty Air Cleaner (HDAC). This air cleaner is composed of three glass-reinforced plastic components and two air filters. Several uncertainties arose in the finite element (FE) model due to the HDAC's component material properties and assembly conditions. To help understand and mitigate these uncertainties, analytical and experimental modal models were created concurrently to perform a model correlation and calibration. Over the course of the project, simple and practical methods were found for future FE model creation, and an experimental method for the optimal acquisition of experimental modal data was arrived at. After the model correlation and calibration were performed, a validation experiment was used to confirm the FE model's predictive capabilities.
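The abstract does not name the correlation metric used, but a standard choice for comparing analytical and experimental mode shapes is the Modal Assurance Criterion (MAC); a minimal sketch, under that assumption, is:

```python
# MAC between one analytical and one experimental mode shape; 1.0 means
# the shapes are identical up to scaling.
import numpy as np

def mac(phi_a, phi_e):
    return np.abs(phi_a @ phi_e) ** 2 / ((phi_a @ phi_a) * (phi_e @ phi_e))

# Hypothetical three-point mode shapes:
print(mac(np.array([1.0, 0.8, 0.3]), np.array([0.9, 0.85, 0.25])))
```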
Abstract:
In this project, an optimal pose selection method for the calibration of an overconstrained cable-driven parallel robot is presented. This manipulator belongs to a subcategory of parallel robots in which the classic rigid "legs" are replaced by cables. Cables are flexible elements that bring both advantages and disadvantages to robot modeling. For this reason, there are many open research issues, and the calibration of geometric parameters is one of them. The identification of a robot's geometry, in particular, is usually called kinematic calibration. Many methods have been proposed in recent years for the solution of this problem. Although these methods are based on calibration using different kinematic models, their robustness and reliability decrease as the robot's geometry becomes more complex, which makes the selection of calibration poses more complicated. The position and orientation of the end-effector in the workspace become important in terms of selection. Thus, in general, it is necessary to evaluate the robustness of the chosen calibration method by means of a parameter such as the observability index. It is known from theory that maximizing this index identifies the best choice of calibration poses, and consequently, using this pose set may improve the calibration process. The objective of this thesis is to analyze optimization algorithms that calculate an optimal choice of poses in both quantitative and qualitative terms: quantitatively, because it is of fundamental importance to understand how many poses are needed (a greater number of poses does not necessarily lead to a better result), and qualitatively, because it is useful to understand whether the selected combination of poses actually adds information to the process of parameter identification.
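One widely used observability index is O1 = (σ1·σ2···σm)^(1/m) / √n, built from the singular values of the identification Jacobian stacked over n calibration poses; the thesis may use another index from the same family, so treat this as a representative sketch:

```python
# Observability index O1 from the identification Jacobian; larger values
# indicate a better-conditioned pose set for kinematic calibration.
import numpy as np

def observability_index(jacobian, n_poses):
    sigma = np.linalg.svd(jacobian, compute_uv=False)
    return sigma.prod() ** (1.0 / len(sigma)) / np.sqrt(n_poses)

J = np.random.rand(30, 6)  # hypothetical: 30 stacked equations, 6 parameters
print(observability_index(J, n_poses=10))
```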
Abstract:
A rapid, sensitive, and specific method for quantifying propylthiouracil in human plasma using methylthiouracil as the internal standard (IS) is described. The analyte and the IS were extracted from plasma by liquid-liquid extraction using an organic solvent (ethyl acetate). The extracts were analyzed by high-performance liquid chromatography coupled with electrospray tandem mass spectrometry (HPLC-MS/MS) in negative mode (ES-). Chromatography was performed using a Phenomenex Gemini C18 5-μm analytical column (4.6 mm × 150 mm i.d.) and a mobile phase consisting of methanol/water/acetonitrile (40/40/20, v/v/v) + 0.1% formic acid. For propylthiouracil and the IS, the optimized parameters of the declustering potential, collision energy, and collision exit potential were -60 V, -26 eV, and -5 V, respectively. The method had a chromatographic run time of 2.5 min and a linear calibration curve over the range 20-5000 ng/mL. The limit of quantification was 20 ng/mL. The stability tests indicated no significant degradation. This HPLC-MS/MS procedure was used to assess the bioequivalence of two propylthiouracil 100 mg tablet formulations in healthy volunteers of both sexes in fasted and fed states. The geometric means and 90% confidence intervals (CI) of the Test/Reference percent ratios were, without and with food, respectively: 109.28% (103.63-115.25%) and 115.60% (109.03-122.58%) for Cmax, and 103.31% (100.74-105.96%) and 103.40% (101.03-105.84%) for AUClast. This method offers advantages over those previously reported, in terms of both a simple liquid-liquid extraction without clean-up procedures and a faster run time (2.5 min). The LOQ of 20 ng/mL is well suited for pharmacokinetic studies. The assay performance results indicate that the method is precise and accurate enough for the routine determination of propylthiouracil in human plasma. The test formulation, with and without food, was bioequivalent to the reference formulation. Food administration increased the Tmax and decreased the bioavailability (Cmax and AUC).
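The bioequivalence statistics above (geometric mean ratio with a 90% CI) come from an analysis on the log scale; the simplified paired-data sketch below illustrates the computation with hypothetical Cmax values, whereas the actual study would use a crossover ANOVA model:

```python
# Geometric mean ratio and 90% CI from log-transformed paired Cmax data.
import numpy as np
from scipy import stats

test = np.array([820.0, 910.0, 760.0, 1010.0, 880.0])  # hypothetical, ng/mL
ref = np.array([780.0, 850.0, 700.0, 940.0, 800.0])

d = np.log(test) - np.log(ref)
gmr = np.exp(d.mean())
lo, hi = np.exp(stats.t.interval(0.90, len(d) - 1, loc=d.mean(),
                                 scale=stats.sem(d)))
print(f"GMR = {gmr * 100:.1f}%, 90% CI = ({lo * 100:.1f}%, {hi * 100:.1f}%)")
```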
Abstract:
Current data indicate that the size of high-density lipoprotein (HDL) may be considered an important marker of cardiovascular disease risk. We established reference values of mean HDL size and volume in an asymptomatic, representative Brazilian population sample (n=590), and their associations with metabolic parameters by gender. Size and volume were determined in HDL isolated from plasma by polyethylene glycol precipitation of apoB-containing lipoproteins and measured using the dynamic light scattering (DLS) technique. Although the gender and age distributions agreed with other studies, the mean HDL size reference value was slightly lower than in some other populations. Both HDL size and volume were influenced by gender and varied according to age. HDL size was associated with age and HDL-C (total population); inversely with non-white ethnicity and CETP (females); and with HDL-C and PLTP mass (males). On the other hand, HDL volume was determined only by HDL-C (total population and both genders) and by PLTP mass (males). The reference values for mean HDL size and volume using the DLS technique were established in an asymptomatic and representative Brazilian population sample, as well as their related metabolic factors. HDL-C was a major determinant of HDL size and volume, which were differently modulated in females and males.
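DLS sizing rests on the Stokes-Einstein relation d = k_B·T / (3πηD), which converts a measured diffusion coefficient into a hydrodynamic diameter; the values below are illustrative, not the study's measurements:

```python
# Hydrodynamic diameter from the Stokes-Einstein relation.
import math

def hydrodynamic_diameter(D, T=298.15, eta=0.89e-3):
    """D in m^2/s; T in K; eta = water viscosity in Pa*s at 25 deg C."""
    kB = 1.380649e-23  # Boltzmann constant, J/K
    return kB * T / (3 * math.pi * eta * D)

D = 5.0e-11  # m^2/s, a plausible value for an HDL-sized particle
print(hydrodynamic_diameter(D) * 1e9, "nm")  # ~9.8 nm, in the HDL size range
```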
Abstract:
This study sought to analyse the behaviour of the average spinal posture, using a novel investigative procedure, during a maximal incremental effort test performed on a treadmill. Spine motion was collected via stereo-photogrammetric analysis in thirteen amateur athletes. At each time percentage of the gait cycle, the reconstructed spine points were projected onto the sagittal and frontal planes of the trunk. On each plane, a polynomial was fitted to the data, and the two-dimensional geometric curvature along the longitudinal axis of the trunk was calculated to quantify the geometric shape of the spine. The average posture over the gait cycle defined the spine Neutral Curve. This method enabled the lateral deviations, lordosis, and kyphosis of the spine to be quantified noninvasively and in detail. The similarity between any two volunteers was at most 19% on the sagittal plane and 13% on the frontal plane (p<0.01). The data collected in this study can be considered preliminary evidence that there are subject-specific characteristics in spinal curvatures during running. Changes induced by increases in speed were not sufficient for the Neutral Curve to lose its individual characteristics; instead, it behaved like a postural signature. The data showed the descriptive capability of a new method to analyse spinal postures during locomotion; however, additional studies with larger sample sizes are necessary to extract more general information from this novel methodology.
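The curvature computation described above can be sketched as fitting a polynomial y(z) to the projected spine points on one trunk plane and evaluating the planar geometric curvature κ(z) = |y''| / (1 + y'²)^(3/2); the polynomial degree and the data here are illustrative:

```python
# Fit a polynomial to projected spine points and compute 2-D curvature.
import numpy as np

def spine_curvature(z, y, degree=5):
    """Curvature of the fitted curve y(z) along the longitudinal axis z."""
    p = np.polynomial.Polynomial.fit(z, y, degree)
    d1, d2 = p.deriv(1), p.deriv(2)
    return np.abs(d2(z)) / (1 + d1(z) ** 2) ** 1.5

z = np.linspace(0, 1, 50)          # normalized longitudinal trunk axis
y = 0.05 * np.sin(2 * np.pi * z)   # hypothetical sagittal-plane projection
print(spine_curvature(z, y).max())
```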
Abstract:
The objective of this study is to verify the dynamics between fiscal policy, measured by public debt, and monetary policy, measured by a central bank reaction function. Changes in monetary policy due to deviations from its targets always generate fiscal impacts. We examine two policy reaction functions: the first related to inflation targets and the second related to economic growth targets. We find that the condition for stable equilibrium is more restrictive in the first case than in the second. We then apply our simulation model to Brazil and the United Kingdom and find that the equilibrium is unstable in the Brazilian case but stable in the UK case.
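As a toy illustration of the fiscal impact of a monetary reaction function (the one-equation model and all parameters are illustrative only, not the paper's model):

```python
# Debt-to-GDP path b_{t+1} = (1 + r - g) * b_t - s, with the real rate r
# implied by a Taylor-type rule i = r* + pi + theta * (pi - pi*).
def simulate_debt(b0=0.6, pi=0.06, pi_star=0.04, r_star=0.02,
                  theta=0.5, g=0.02, s=0.02, periods=10):
    i = r_star + pi + theta * (pi - pi_star)  # policy reaction to inflation gap
    r = i - pi                                # implied real interest rate
    b, path = b0, []
    for _ in range(periods):
        b = (1 + r - g) * b - s               # standard debt accumulation identity
        path.append(round(b, 3))
    return path

print(simulate_debt())  # a larger inflation gap raises i, r, and the debt path
```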
Abstract:
Cancer is a multistep process that begins with the transformation of normal epithelial cells and continues with tumor growth, stromal invasion, and metastasis. The remodeling of the peritumoral environment is decisive for the onset of tumor invasiveness. This event depends on epithelial-stromal interactions, degradation of extracellular matrix components, and reorganization of fibrillar components. Our research group has studied, in a newly proposed rodent model, the participation of cellular and molecular components of the prostate microenvironment that contribute to cancer progression. Our group adopted the gerbil Meriones unguiculatus as an alternative experimental model for prostate cancer study. This model has presented significant responses to hormonal treatments and to the development of spontaneous and induced neoplasias. The data obtained indicate reorganization of type I collagen fibers and reticular fibers, synthesis of new components such as tenascin and proteoglycans, degradation of basement membrane components and elastic fibers, and increased expression of metalloproteinases. Fibroblasts that border the region apparently participate in the stromal reaction. The roles of each of these events, as well as some signaling molecules, participants in neoplastic progression, and factors that promote genetic reprogramming during the epithelial-stromal transition, are also discussed.
Abstract:
Research on evolving interfaces initially focused on scientific problems in fluid dynamics. With the advent of the more robust modeling provided by the Level Set method, its original boundaries of applicability were extended. In the geometric modeling area specifically, published works relating the Level Set method to three-dimensional surface reconstruction have centered on reconstruction from point clouds dispersed in space; the approach based on parallel planar slices transversal to the object to be reconstructed is still incipient. Based on this fact, the present work analyses the feasibility of the Level Set method for three-dimensional reconstruction, offering a methodology that integrates ideas already proven efficient in the literature with proposals for handling limitations of the method not yet satisfactorily treated, in particular the excessive smoothing of fine contour features during Level Set evolution. In this regard, the Particle Level Set variant is suggested as a solution, for its proven intrinsic capability to preserve the mass of dynamic fronts. Finally, synthetic and real data sets are used to qualitatively evaluate the proposed three-dimensional surface reconstruction methodology.
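A minimal sketch of the front evolution underlying the Level Set method (constant outward normal speed, first-order upwind scheme, periodic boundaries for brevity) is given below; the thesis's reconstruction pipeline, and the mass-preserving Particle Level Set variant, involve considerably more machinery:

```python
# Evolve phi_t + F * |grad(phi)| = 0 for F > 0 with a Godunov upwind scheme.
import numpy as np

def evolve(phi, F=1.0, dt=0.4, steps=10, h=1.0):
    for _ in range(steps):
        dxm = (phi - np.roll(phi, 1, axis=1)) / h    # backward x-difference
        dxp = (np.roll(phi, -1, axis=1) - phi) / h   # forward x-difference
        dym = (phi - np.roll(phi, 1, axis=0)) / h
        dyp = (np.roll(phi, -1, axis=0) - phi) / h
        grad = np.sqrt(np.maximum(dxm, 0)**2 + np.minimum(dxp, 0)**2 +
                       np.maximum(dym, 0)**2 + np.minimum(dyp, 0)**2)
        phi = phi - dt * F * grad                    # advance the front
    return phi

# Signed distance to a circle of radius 5; its zero level set expands outward.
y, x = np.mgrid[-16:16, -16:16]
phi = np.sqrt(x**2 + y**2) - 5.0
print((evolve(phi) < 0).sum(), "grid cells inside the front after 10 steps")
```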