944 results for Piecewise Convex Curves


Relevance: 10.00%

Abstract:

The main focus of this paper is the motion planning problem for a deeply submerged rigid body. The equations of motion are formulated within the framework of differential geometry and incorporate external dissipative and restoring forces. We consider a kinematic reduction of the affine connection control system for a rigid body submerged in an ideal fluid, and extend this reduction to the forced affine connection control system for a rigid body submerged in a viscous fluid. The motion planning strategy is based on kinematic motions: the integral curves of rank-one kinematic reductions. This method is of particular interest for autonomous underwater vehicles that cannot directly control all six degrees of freedom (such as torpedo-shaped AUVs) or in the case of actuator failure (i.e., an under-actuated scenario). A practical example is included to illustrate our technique.

Relevance: 10.00%

Abstract:

This paper discusses control strategies adapted for practical implementation and efficient motion of underwater vehicles. These trajectories are piecewise-constant thrust arcs with few actuator switchings. We provide a numerical algorithm that computes the time-efficient trajectories parameterized by their switching times, and discuss both the theoretical analysis and experimental implementation results.
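A trajectory parameterized by switching times can be sketched as a simple lookup from time to the active thrust arc. This is an illustrative sketch only; the function and variable names are not from the paper, and the example switching times and thrust levels are hypothetical.

```python
import numpy as np

def thrust_profile(switch_times, thrust_levels):
    """Return u(t) for a piecewise-constant thrust trajectory.

    switch_times : increasing times [t1, ..., tk] at which the actuator switches.
    thrust_levels : k+1 constant thrust values, one per arc.
    """
    switch_times = np.asarray(switch_times, dtype=float)
    thrust_levels = np.asarray(thrust_levels, dtype=float)
    assert len(thrust_levels) == len(switch_times) + 1

    def u(t):
        # searchsorted counts how many switching times precede t,
        # which is exactly the index of the arc containing t.
        return thrust_levels[np.searchsorted(switch_times, t, side="right")]
    return u

# Two switches at t=1 s and t=3 s, hence three constant arcs.
u = thrust_profile([1.0, 3.0], [0.5, -0.2, 0.0])
```

Optimizing over such a profile then reduces to optimizing over the finite vector of switching times, which is what makes the parameterization attractive in practice.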

Relevance: 10.00%

Abstract:

This paper studies the practical but challenging problem of motion planning for a deeply submerged rigid body. Here, we formulate the dynamic equations of motion of a submerged rigid body under the architecture of differential geometric mechanics and include external dissipative and potential forces. The mechanical system is represented as a forced affine-connection control system on the configuration space SE(3). Solutions to the motion planning problem are computed by concatenating and reparameterizing the integral curves of decoupling vector fields. We provide an extension to this inverse kinematic method to compensate for external potential forces caused by buoyancy and gravity. We present a mission scenario and implement the theoretically computed control strategy on a test-bed autonomous underwater vehicle. This scenario emphasizes the use of this motion planning technique in the under-actuated situation, in which the vehicle loses direct control of one or more degrees of freedom. We include experimental results to illustrate our technique and validate our method.

Relevance: 10.00%

Abstract:

The success rate of carrier-phase ambiguity resolution (AR) is the probability that the ambiguities are successfully fixed to their correct integer values. In existing work, an exact success-rate formula for the integer bootstrapping estimator has been used as a sharp lower bound for the integer least squares (ILS) success rate. Rigorous computation of the success rate for the more general ILS solutions has been considered difficult because of the complexity of the ILS ambiguity pull-in region and the computational load of integrating the multivariate probability density function. The contributions of this work are twofold. First, the pull-in region, mathematically expressed via the vertices of a polyhedron, is represented by a multi-dimensional grid, at which the cumulative probability can be integrated with the multivariate normal cumulative density function (mvncdf) available in Matlab. The bivariate case is studied, where the pull-in region is usually defined as a hexagon and the probability is easily obtained by evaluating mvncdf at all the grid points within the convex polygon. Second, the paper compares the computed integer rounding and integer bootstrapping success rates, and the lower and upper bounds of the ILS success rate, to the actual ILS AR success rates obtained from a 24 h GPS data set for a 21 km baseline. The results demonstrate that the upper bound of the ILS AR probability given in the existing literature agrees well with the actual ILS success rate, while the success rate computed with the integer bootstrapping method is a quite sharp approximation to it. The results also show that epoch-to-epoch variations, or uncertainty, in the unit-weight variance estimates significantly affect the success rates computed by the different methods, and thus deserve more attention if useful success-probability predictions are to be obtained.
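The integer-bootstrapping lower bound referred to above has a well-known closed form, P = ∏ᵢ [2Φ(1/(2σᵢ)) − 1], where the σᵢ are the conditional standard deviations of the sequentially fixed ambiguities (the square roots of the diagonal of D in the L·D·Lᵀ factorization of the ambiguity covariance matrix, obtainable from the Cholesky factor). A minimal Python sketch; the example covariance matrix is illustrative and not from the paper's data set.

```python
import numpy as np
from math import erf, sqrt

def norm_cdf(x):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bootstrap_success_rate(Q):
    """Exact integer-bootstrapping success rate, a sharp lower bound on the
    ILS success rate when Q is the (preferably decorrelated) ambiguity
    variance-covariance matrix.

    P = prod_i [2*Phi(1/(2*sigma_i)) - 1], where the conditional standard
    deviations sigma_i are the diagonal entries of the Cholesky factor of Q.
    """
    C = np.linalg.cholesky(np.asarray(Q, dtype=float))
    sigmas = np.diag(C)  # conditional standard deviations
    p = 1.0
    for s in sigmas:
        p *= 2.0 * norm_cdf(1.0 / (2.0 * s)) - 1.0
    return p

# Example: two precise, weakly correlated ambiguities -> success rate near 1.
Q = np.array([[0.01, 0.002],
              [0.002, 0.02]])
```

Replacing this product with a grid integration of the multivariate normal density over the pull-in region, as the paper does, yields the ILS success rate itself rather than a bound.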

Relevance: 10.00%

Abstract:

Background: The enthesis of the plantar fascia is thought to play an important role in stress dissipation. However, the potential link between the entheseal thickening characteristic of enthesopathy and the stress-dissipating properties of the intervening plantar fat pad has not been investigated. Purpose: This study was conducted to identify whether plantar fat pad mechanics explain variance in the thickness of the fascial enthesis in individuals with and without plantar enthesopathy. Study Design: Case-control study; Level of evidence, 3. Methods: The study population consisted of 9 patients with unilateral plantar enthesopathy and 9 asymptomatic, individually matched controls. The thickness of the enthesis of the symptomatic, asymptomatic, and a matched control limb was acquired using high-resolution ultrasound. The compressive strain of the plantar fat pad during walking was estimated from dynamic lateral radiographs acquired with a multifunction fluoroscopy unit. Peak compressive stress was simultaneously acquired via a pressure platform. Principal viscoelastic parameters were estimated from the resulting stress-strain curves. Results: The symptomatic fascial enthesis (6.7 ± 2.0 mm) was significantly thicker than the asymptomatic enthesis (4.2 ± 0.4 mm), which in turn was thicker than the enthesis (3.3 ± 0.4 mm) of control limbs (P < .05). There was no significant difference in the mean thickness, peak stress, peak strain, or secant modulus of the plantar fat pad between limbs. However, the energy dissipated by the fat pad during loading and unloading was significantly lower in the symptomatic limb (0.55 ± 0.17) when compared with asymptomatic (0.69 ± 0.13) and control (0.70 ± 0.09) limbs (P < .05). The sonographic thickness of the enthesis was correlated with the energy dissipation ratio of the plantar fat pad (r = .72, P < .05), but only in the symptomatic limb.
Conclusion: The energy-dissipating properties of the plantar fat pad are associated with the sonographic appearance of the enthesis in symptomatic limbs, providing a previously unidentified link between the mechanical behavior of the plantar fat pad and enthesopathy.
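An energy dissipation ratio of the kind reported above can be computed from the loading and unloading branches of a stress-strain cycle as the fraction of the loading energy not returned on unloading. A sketch on synthetic data; the study's exact viscoelastic parameter definitions may differ.

```python
import numpy as np

def _trapz(y, x):
    # Trapezoidal integration, kept explicit for clarity.
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * (x[1:] - x[:-1])))

def energy_dissipation_ratio(strain_load, stress_load, strain_unload, stress_unload):
    """Fraction of the loading energy lost over one load-unload cycle,
    from the area between the stress-strain branches (hysteresis loop).
    Illustrative only: not necessarily the study's exact formulation.
    """
    e_in = _trapz(stress_load, strain_load)       # energy absorbed on loading
    e_out = _trapz(stress_unload, strain_unload)  # energy returned on unloading
    return (e_in - e_out) / e_in

# Synthetic cycle: linear loading, quadratic unloading over the same strains.
eps = np.linspace(0.0, 1.0, 1001)
ratio = energy_dissipation_ratio(eps, eps, eps, eps ** 2)
```

For the synthetic cycle the loading energy is 1/2 and the unloading energy 1/3, so the dissipation ratio is 1/3; a stiffer hysteresis loop (as in the symptomatic limbs above) gives a lower value.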

Relevance: 10.00%

Abstract:

Skid resistance is a condition parameter characterising the contribution that a road makes to the friction between a road surface and a vehicle tyre. Studies of traffic crash histories around the world have consistently found that a disproportionate number of crashes occur where the road surface has a low level of surface friction and/or surface texture, particularly when the road surface is wet. Research published over many years has tried to quantify the influence of skid resistance on accident occurrence and to characterise a correlation between skid resistance and accident frequency; most of these studies used simple statistical correlation methods in analysing skid resistance and crash data.

Preliminary findings of a systematic and extensive literature search conclude that there is rarely a single causation factor in a crash. Findings from research projects do affirm various levels of correlation between skid resistance and accident occurrence, and studies indicate that the level of skid resistance at critical places such as intersections, curves, roundabouts, ramps and approaches to pedestrian crossings needs to be well maintained.

Management of risk is an integral aspect of the Queensland Department of Main Roads (QDMR) strategy for managing its infrastructure assets. The risk-based approach has been used in many areas of infrastructure engineering; however, very limited information has been reported on using a risk-based approach to mitigate crash rates related to the road surface, even though low skid resistance and surface texture may increase the risk of traffic crashes.

The objectives of this paper are to explore current issues of skid resistance in relation to crashes, to provide a framework for a probability-based approach to be adopted by QDMR in assessing the relationship between crashes and pavement properties, and to explain why the probability-based approach is a suitable tool for QDMR to reduce accident rates due to skid resistance.

Relevance: 10.00%

Abstract:

Partially Grouted Reinforced Masonry (PGRM) shear walls perform well in places where cyclonic wind pressure dominates the design. Their out-of-plane flexural performance is better understood than their in-plane shear behaviour; in particular, it is not clear whether PGRM shear walls act as unreinforced masonry (URM) walls embedded with discrete reinforced grouted cores or as integral systems of reinforced masonry (RM) with wider spacing of reinforcement. With a view to understanding the in-plane response of PGRM shear walls, ten full-scale, single-leaf clay block walls were constructed and tested under monotonic and cyclic in-plane loading. Based on the displacement ductility and stiffness degradation factors derived from the complete lateral load–lateral displacement curves, it is shown that where the spacing of the vertical reinforcement is less than 2000 mm, the walls behave as an integral RM system; for spacing greater than 2000 mm, the walls behave similarly to URM, with no significant benefit from the reinforced cores.

Relevance: 10.00%

Abstract:

Twelve beam-to-column connections between cold-formed steel sections, comprising three beam depths and four connection types, were tested in isolation to investigate their behavior in terms of strength, stiffness and ductility. The resulting moment-rotation curves indicate that the tested connections are efficient moment connections, with moment capacities ranging from about 65% to 100% of the connected beam capacity. With a moment capacity greater than 80% of the connected beam member capacity, some of the connections can be regarded as full-strength connections. The connections also possessed sufficient ductility, with rotations of 20 mrad at failure, although some were too ductile, with rotations in excess of 30 mrad. Generally, most of the connections possess the strength and ductility to be considered partial-strength connections. The ultimate failure of almost all of the connections was due to local buckling of the compression web and flange elements of the beam closest to the connection.

Relevance: 10.00%

Abstract:

Background: Waist circumference has been identified as a valuable predictor of cardiovascular risk in children. The development of waist circumference percentiles and cut-offs for various ethnic groups is necessary because of differences in body composition. The purpose of this study was to develop waist circumference percentiles for Chinese children and to explore optimal waist circumference cut-off values for predicting the clustering of cardiovascular risk factors in this population.

Methods: Height, weight, and waist circumference were measured in 5529 children (2830 boys and 2699 girls) aged 6-12 years randomly selected from southern and northern China. Blood pressure, fasting triglycerides, low-density lipoprotein cholesterol, high-density lipoprotein cholesterol, and glucose were obtained in a subsample (n = 1845). Smoothed percentile curves were produced using the LMS method. Receiver-operating characteristic analysis was used to derive the optimal age- and gender-specific waist circumference thresholds for predicting the clustering of cardiovascular risk factors.

Results: Gender-specific waist circumference percentiles were constructed. The waist circumference thresholds were at the 90th and 84th percentiles for Chinese boys and girls respectively, with sensitivity and specificity ranging from 67% to 83%. The odds ratios of a clustering of cardiovascular risk factors among boys and girls above the cut-off points were 10.349 (95% confidence interval 4.466 to 23.979) and 8.084 (95% confidence interval 3.147 to 20.767), respectively, compared with their counterparts.

Conclusions: Percentile curves for the waist circumference of Chinese children are provided. The cut-off point for waist circumference to predict the clustering of cardiovascular risk factors is at the 90th and 84th percentiles for Chinese boys and girls, respectively.
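A common way to pick an optimal cut-off from a receiver-operating characteristic analysis, as in the Methods above, is to maximize the Youden index (sensitivity + specificity − 1). A minimal sketch on hypothetical waist-circumference data, not the study's actual sample; the function name is illustrative.

```python
import numpy as np

def youden_optimal_cutoff(values, labels):
    """Scan candidate cut-offs and return the one maximizing the Youden
    index J = sensitivity + specificity - 1, a standard ROC-based rule."""
    values = np.asarray(values, dtype=float)
    labels = np.asarray(labels, dtype=bool)  # True = risk-factor clustering
    best_cut, best_j = None, -np.inf
    for cut in np.unique(values):
        pred = values >= cut                 # classified "at risk" above cut
        sens = np.mean(pred[labels])         # true positive rate
        spec = np.mean(~pred[~labels])       # true negative rate
        j = sens + spec - 1.0
        if j > best_j:
            best_cut, best_j = cut, j
    return best_cut, best_j

# Hypothetical perfectly separable example: cut-off lands at 70 cm.
cut, j = youden_optimal_cutoff([60, 62, 64, 70, 72, 75], [0, 0, 0, 1, 1, 1])
```

In the study the resulting cut-offs were then mapped back to the age- and gender-specific percentile curves (90th for boys, 84th for girls).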

Relevance: 10.00%

Abstract:

This paper presents a material model to simulate load-induced cracking in Reinforced Concrete (RC) elements in the ABAQUS finite element package. Two numerical material models are combined to simulate the complete stress-strain behaviour of concrete under compression and tension, including damage properties. Both numerical techniques are capable of developing the stress-strain curves, including the strain-softening regimes, using only the ultimate compressive strength of concrete, which is easily and practically obtainable for many existing RC structures or those to be built. The method proposed in this paper is therefore valuable for assessing existing RC structures in the absence of more detailed test results. The numerical models are slightly modified from their original versions to be compatible with the damaged plasticity model used in ABAQUS. The model is validated against experimental results for RC beam elements from the literature, and the results indicate good agreement with the load-displacement curves and the observed crack patterns.
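To illustrate how a full compressive stress-strain curve with a softening branch can be generated from the ultimate compressive strength alone, here is a sketch using the classic Hognestad parabola with a linear descending branch. This is an illustrative stand-in, not the paper's actual (unnamed) numerical models; the elastic-modulus rule and the ultimate strain are assumptions.

```python
import numpy as np

def hognestad_curve(fc, n=100):
    """Compressive stress-strain curve for concrete from its ultimate
    strength fc (MPa) alone: Hognestad parabola up to the peak, then a
    linear softening branch losing 15% of strength at the ultimate strain.
    """
    Ec = 4700.0 * np.sqrt(fc)   # elastic modulus, ACI-type empirical rule
    eps0 = 2.0 * fc / Ec        # strain at peak stress
    eps_u = 0.0038              # assumed ultimate compressive strain
    eps = np.linspace(0.0, eps_u, n)
    sigma = np.where(
        eps <= eps0,
        fc * (2.0 * eps / eps0 - (eps / eps0) ** 2),          # ascending parabola
        fc * (1.0 - 0.15 * (eps - eps0) / (eps_u - eps0)),    # linear softening
    )
    return eps, sigma

eps, sigma = hognestad_curve(30.0)  # curve for a 30 MPa concrete
```

A curve of this form (peak stress fc at strain eps0, descending to 0.85 fc) is the kind of input the ABAQUS damaged plasticity model consumes as tabulated compressive behaviour.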

Relevance: 10.00%

Abstract:

A significant proportion of the cost of software development is due to software testing and maintenance. This is in part the result of the inevitable imperfections due to human error, a lack of quality during the design and coding of software, and the increasing need to reduce faults to improve customer satisfaction in a competitive marketplace. Given the cost and importance of removing errors, improvements in fault detection and removal can be of significant benefit. The earlier in the development process faults can be found, the less it costs to correct them and the less likely other faults are to develop. This research aims to make the testing process more efficient and effective by identifying those software modules most likely to contain faults, allowing testing efforts to be carefully targeted. This is done with machine learning algorithms, which use examples of fault-prone and not fault-prone modules to develop predictive models of quality. In order to learn the numerical mapping between a module and its classification, a module is represented in terms of software metrics. A difficulty in this sort of problem is sourcing software engineering data of adequate quality; in this work, data are obtained from two sources, the NASA Metrics Data Program and the open-source Eclipse project. Feature selection is applied before learning, and a number of different feature selection methods are compared to find which work best. Two machine learning algorithms, Naive Bayes and the Support Vector Machine, are applied to the data, and the predictive results are compared to those of previous efforts and found to be superior on selected data sets and comparable on others. In addition, a new classification method, Rank Sum, is proposed, in which a ranking abstraction is laid over bin densities for each class, and a classification is determined based on the sum of ranks over the features.
A novel extension of this method is also described, based on an observed polarising of points by class when rank sum is applied to training data to convert it into a 2D rank-sum space. An SVM is applied to this transformed data to produce models whose parameters can be set according to trade-off curves to obtain a particular performance trade-off.
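The Rank Sum idea can be sketched roughly as follows. This is a loose reconstruction from the one-sentence description above, not the thesis's actual algorithm: the bin count, the rank direction (rank 0 = densest class) and the tie handling are all assumptions.

```python
import numpy as np

def rank_sum_fit(X, y, bins=10):
    """Per-feature, per-class bin densities for a Rank Sum style classifier."""
    classes = np.unique(y)
    edges, dens = [], []
    for j in range(X.shape[1]):
        e = np.histogram_bin_edges(X[:, j], bins=bins)
        edges.append(e)
        # Density of each class within each bin of feature j.
        d = np.stack([np.histogram(X[y == c, j], bins=e, density=True)[0]
                      for c in classes])
        dens.append(d)
    return classes, edges, dens

def rank_sum_predict(x, classes, edges, dens):
    """Classify one sample: in each feature's bin, rank the classes by
    density (rank 0 = densest) and pick the class with the lowest total."""
    totals = np.zeros(len(classes))
    for j, (e, d) in enumerate(zip(edges, dens)):
        b = np.clip(np.searchsorted(e, x[j], side="right") - 1,
                    0, d.shape[1] - 1)       # bin index holding x[j]
        order = np.argsort(-d[:, b])         # classes sorted dense-to-sparse
        ranks = np.empty(len(classes))
        ranks[order] = np.arange(len(classes))
        totals += ranks
    return classes[np.argmin(totals)]
</```

The 2D rank-sum-space extension would then use the per-class rank totals of each point as coordinates and fit an SVM in that plane.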

Relevance: 10.00%

Abstract:

The tear film plays an important role in preserving the health of the ocular surface and maintaining the optimal refractive power of the cornea. Moreover, dry eye syndrome, one of the most commonly reported eye health problems, is caused by abnormalities in the properties of the tear film. Current clinical tools to assess the tear film have shown certain limitations: the traditional invasive methods for the assessment of tear film quality, which are used by most clinicians, have been criticized for a lack of reliability and/or repeatability, while the range of non-invasive methods that has been investigated also presents limitations. Hence no "gold standard" test is currently available to assess tear film integrity, and improving techniques for the assessment of tear film quality is of clinical significance and the main motivation for the work described in this thesis.

In this study, tear film surface quality (TFSQ) changes were investigated by means of high-speed videokeratoscopy (HSV). In this technique, a set of concentric rings formed in an illuminated cone or bowl is projected onto the anterior cornea and their reflection from the ocular surface is imaged on a charge-coupled device (CCD). The reflection is produced in the outermost layer of the cornea, the tear film. Hence, when the tear film is smooth, the reflected image presents a well-structured pattern; when the tear film surface presents irregularities, the pattern also becomes irregular due to scatter and deviation of the reflected light. The videokeratoscope provides an estimate of the corneal topography associated with each Placido disk image. Topographical estimates, which have been used in the past to quantify tear film changes, may not always be suitable for evaluating all the dynamic phases of the tear film; the Placido disk image itself, which contains the reflected pattern, may be more appropriate for assessing tear film dynamics.

A set of novel routines was purposely developed to quantify the changes of the reflected pattern and to extract a time-series estimate of the TFSQ from the video recording. The routines extract from each frame a maximized area of analysis, within which a metric of the TFSQ is calculated. Initially, two metrics based on Gabor filters and Gaussian gradient-based techniques were used to quantify the consistency of the pattern's local orientation as a measure of TFSQ. These metrics helped to demonstrate the applicability of HSV to assess the tear film, and the influence of contact lens wear on TFSQ. The results suggest that the dynamic-area analysis method of HSV was able to distinguish and quantify the subtle but systematic degradation of tear film surface quality in the inter-blink interval during contact lens wear, and to clearly show a difference between bare-eye and contact lens wearing conditions. Thus, the HSV method appears to be a useful technique for quantitatively investigating the effects of contact lens wear on TFSQ.

Subsequently, a larger clinical study was conducted to compare HSV with two other non-invasive techniques, lateral shearing interferometry (LSI) and dynamic wavefront sensing (DWS). Of these, HSV appeared to be the most precise method for measuring TFSQ, by virtue of its lower coefficient of variation, while LSI appeared to be the most sensitive method for analyzing tear build-up time (TBUT). The capability of each of the non-invasive methods to discriminate dry eye from normal subjects was also investigated, with receiver operating characteristic (ROC) curves calculated to assess the ability of each method to predict dry eye syndrome. The LSI technique gave the best results under both natural and suppressed blinking conditions, closely followed by HSV; the DWS did not perform as well as LSI or HSV.

The main limitation of the HSV technique identified during this clinical study was a lack of sensitivity to quantify the build-up (formation) phase of the tear film cycle. For that reason, an extra metric based on image transformation and block processing was proposed. In this metric, the area of analysis is transformed from Cartesian to polar coordinates, converting the concentric-circle pattern into a quasi-straight-line image from which a block statistics value is extracted. This metric has shown better sensitivity under low pattern disturbance and has also improved the performance of the ROC curves.

Additionally, a theoretical study based on ray-tracing techniques and topographical models of the tear film was undertaken to fully comprehend the HSV measurement and the instrument's potential limitations. Of special interest was the assessment of the instrument's sensitivity to subtle topographic changes. The theoretical simulations helped to provide some understanding of tear film dynamics; for instance, the model derived for the build-up phase provided some insight into the dynamics of this initial phase.

Finally, some aspects of the mathematical modeling of TFSQ time series are reported in this thesis. Over the years, different functions have been used to model such time series and to extract the key clinical parameters (i.e., timing); unfortunately, these techniques do not simultaneously consider the underlying physiological mechanism and the parameter extraction methods. A set of guidelines is proposed to meet both criteria, with special attention given to a commonly used fit, the polynomial function, and to selecting the appropriate model order so that the true derivative of the signal is accurately represented.

The work described in this thesis has shown the potential of using high-speed videokeratoscopy to assess tear film surface quality. A set of novel image and signal processing techniques has been proposed to quantify different aspects of tear film assessment, analysis and modeling. The dynamic-area HSV has shown good performance in a broad range of conditions (i.e., contact lens, normal and dry eye subjects); as a result, this technique could be a useful clinical tool to assess tear film surface quality in the future.
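The Cartesian-to-polar block-processing metric can be sketched as follows: resample the ring image around its center so the concentric circles become near-horizontal lines, then score regularity by a block statistic along the angular direction. This is an illustrative reconstruction; the grid sizes, the nearest-neighbour sampling and the choice of standard deviation as the block statistic are assumptions, not the thesis's exact routine.

```python
import numpy as np

def polar_block_metric(img, center, n_r=64, n_theta=128, block=8):
    """Resample a Placido-disk image into polar coordinates and return the
    mean per-block standard deviation along theta. A smooth tear film gives
    rings that depend only on radius, hence a low value; a disturbed film
    scatters the pattern and raises it. (n_theta must be divisible by block.)
    """
    h, w = img.shape
    cy, cx = center
    r = np.linspace(1.0, min(h, w) // 2 - 1, n_r)
    th = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
    rr, tt = np.meshgrid(r, th, indexing="ij")
    # Nearest-neighbour sampling of the Cartesian image on the polar grid.
    ys = np.clip((cy + rr * np.sin(tt)).astype(int), 0, h - 1)
    xs = np.clip((cx + rr * np.cos(tt)).astype(int), 0, w - 1)
    polar = img[ys, xs]                     # rings -> quasi-straight lines
    blocks = polar.reshape(n_r, n_theta // block, block)
    return float(np.mean(blocks.std(axis=2)))
```

On a synthetic concentric-ring image this metric stays near zero, while any angular disturbance of the pattern increases it, which is the behaviour the extra TFSQ metric relies on.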

Relevance: 10.00%

Abstract:

In this paper we identify the origins of stop-and-go (or slow-and-go) driving and measure microscopic features of their propagation by analyzing vehicle trajectories via the wavelet transform. Based on 53 oscillation cases analyzed, we find that oscillations can originate from either lane-changing maneuvers (LCMs) or car-following (CF) behavior. LCMs were predominantly responsible for oscillation formation in the absence of considerable horizontal or vertical curves, whereas oscillations formed spontaneously near roadside work on an uphill segment. Regardless of the trigger, the features of oscillation propagation were similar in terms of propagation speed, oscillation duration, and amplitude. All observed cases initially exhibited a precursor phase, in which slow-and-go motions were localized. Some of them eventually transitioned into a well-developed phase, in which oscillations propagated upstream in the queue. LCMs were primarily responsible for the transition, although some transitions occurred without LCMs. Our findings also suggest that an oscillation has a regressive effect on car-following behavior: a deceleration wave causes a timid driver (one with larger response time and minimum spacing) to become less timid and an aggressive driver to become less aggressive, although this change may be short-lived. An extended framework of Newell's CF model is able to describe these regressive effects with two additional parameters with reasonable accuracy, as verified using vehicle trajectory data.
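The base model that the extended framework builds on is Newell's simplified car-following rule: a follower reproduces the leader's trajectory shifted by a response time tau and a minimum spacing d. A minimal sketch (the two extra parameters the paper adds to vary tau and d through an oscillation are not shown, and the numbers are illustrative):

```python
def newell_position(leader_x, t, tau, d):
    """Newell's simplified car-following model: the follower's position at
    time t is the leader's position at time t - tau, minus a spacing d.

    leader_x : callable giving the leader's position at a given time.
    tau      : driver response time (larger for a timid driver).
    d        : minimum spacing (larger for a timid driver).
    """
    return leader_x(t - tau) - d

# Leader cruising at 20 m/s from the origin.
leader = lambda t: 20.0 * t
x_timid = newell_position(leader, t=10.0, tau=1.5, d=10.0)  # trails further back
x_aggr  = newell_position(leader, t=10.0, tau=0.5, d=6.0)   # follows more closely
```

The regressive effect described above amounts to tau and d drifting toward intermediate values after a deceleration wave passes, which the extended framework captures with its two additional parameters.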

Relevance: 10.00%

Abstract:

Suburbanisation has been a major international phenomenon in recent decades. Suburb-to-suburb routes are nowadays the most widespread road journeys, and this has resulted in an increase in distances travelled, particularly on faster suburban highways. The design of highways tends to over-simplify the driving task, which can result in decreased alertness; driving behaviour is consequently impaired, and drivers are then more likely to be involved in road crashes. This is particularly dangerous on highways where the speed limit is high. While effective countermeasures to this decrease in alertness do not currently exist, the development of in-vehicle sensors opens avenues for monitoring driving behaviour in real time. The aim of this study is to evaluate in real time the level of alertness of the driver through surrogate measures that can be collected from in-vehicle sensors. Slow EEG activity is used as a reference to evaluate the driver's alertness. Data were collected in a driving simulator instrumented with an eye-tracking system, a heart-rate monitor and an electrodermal activity device (N = 25 participants). Four different types of highways (driving scenarios of 40 minutes each) were implemented by varying the road design (amount of curves and hills) and the roadside environment (amount of buildings and traffic). We show with neural networks that reduced alertness can be detected in real time with an accuracy of 92% using lane positioning, steering wheel movement, head rotation, blink frequency, heart rate variability and skin conductance level. These results show that it is possible to assess a driver's alertness with surrogate measures. This methodology could be used to warn drivers of their alertness level through an in-vehicle device monitoring drivers' behaviour on highways in real time, and could therefore result in improved road safety.
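The classification step (surrogate measures in, alert/drowsy label out) can be sketched with a simple trainable model. As a stand-in for the paper's neural networks, here is a logistic-regression classifier trained by gradient descent on hypothetical data with a single surrogate feature; the function names, learning rate and data are all assumptions.

```python
import numpy as np

def train_alertness_classifier(X, y, lr=0.1, epochs=2000):
    """Binary classifier over surrogate measures (e.g., lane position,
    blink frequency, heart-rate variability). Logistic regression trained
    by gradient descent on the log-loss; a stand-in for the paper's NN.
    """
    Xb = np.column_stack([np.ones(len(X)), X])   # prepend a bias column
    w = np.zeros(Xb.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))        # sigmoid activation
        w -= lr * Xb.T @ (p - y) / len(y)        # gradient of the log-loss
    return w

def predict_alertness(w, X):
    """Return 1 (reduced alertness) where the model probability >= 0.5."""
    Xb = np.column_stack([np.ones(len(X)), X])
    return (1.0 / (1.0 + np.exp(-Xb @ w)) >= 0.5).astype(int)
```

In the paper's setting, the single feature would be replaced by the six surrogate measures listed above, with slow EEG activity providing the training labels.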

Relevance: 10.00%

Abstract:

Thermogravimetry combined with evolved-gas mass spectrometry has been used to ascertain the stability of the soil minerals destinezite and diadochite. These two minerals are identical except for their morphology: diadochite is amorphous, whereas destinezite is crystalline. Both minerals are found in soils, and it is important to understand their stability because soils are subject to bush fires, especially in Australia. The thermal analysis patterns of the two minerals are similar but not identical, with subtle differences observed in the DTG patterns. For destinezite, two DTG peaks observed at 129 and 182°C are attributed to the loss of hydration water, whereas only a broad peak with a maximum at 84°C is observed for diadochite. Higher-temperature mass losses at 685°C for destinezite and 655°C for diadochite, based upon the ion current curves, are due to sulphate decomposition. This research has shown that at low temperatures the minerals are stable, but at high temperatures, as might be experienced in a bush fire, the minerals decompose.