921 results for Algebraic Curve


Relevance: 10.00%

Abstract:

Travel speed is one of the most critical parameters for road safety; the evidence suggests that increased vehicle speed is associated with higher crash risk and injury severity. Both naturalistic and simulator studies have reported that drivers distracted by a mobile phone select a lower driving speed. Speed decrements have been argued to be a risk-compensatory behaviour of distracted drivers. Nonetheless, the extent and circumstances of the speed change among distracted drivers remain poorly understood. As such, the primary objective of this study was to investigate patterns of speed variation in relation to contextual factors and distraction. Using the CARRS-Q high-fidelity Advanced Driving Simulator, the speed selection behaviour of 32 drivers aged 18-26 years was examined in two phone conditions: baseline (no phone conversation) and handheld phone operation. The simulated driving route contained five different types of road traffic complexity: a road section with a horizontal S curve, a horizontal S curve with adjacent traffic, a straight segment of suburban road without traffic, a straight segment of suburban road with traffic interactions, and a road segment in a city environment. Speed deviations from the posted speed limit were analysed using Ward's hierarchical clustering method to identify the effects of road traffic environment and cognitive distraction. The speed deviations along curved road sections formed two different clusters for the two phone conditions, implying that distracted drivers adopt a different strategy for selecting driving speed in a complex driving situation. In particular, distracted drivers selected a lower speed while driving along a horizontal curve. The speed deviations along the city road segment and the other straight road segments grouped into a different cluster, and the deviations were not significantly different across phone conditions, suggesting a negligible effect of distraction on speed selection along these road sections. Future research should focus on developing a risk compensation model to explain the relationship between road traffic complexity and distraction.
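
The clustering step described above can be reproduced in outline with SciPy; the sketch below is a minimal illustration, assuming the speed deviations are arranged as one row per driver-condition pair and one column per road section (the data layout and values are placeholders, not the study's).

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

# Placeholder matrix: mean speed deviation from the posted limit,
# one row per (driver, phone condition), one column per road section.
rng = np.random.default_rng(0)
speed_dev = rng.normal(loc=-2.0, scale=3.0, size=(64, 5))

Z = linkage(speed_dev, method="ward")             # Ward's hierarchical clustering
labels = fcluster(Z, t=2, criterion="maxclust")   # cut the dendrogram into 2 clusters
print(labels)
```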

Relevance: 10.00%

Abstract:

This paper presents a novel algebraic formulation of the central problem of screw theory, namely the determination of the principal screws of a given system. Using the algebra of dual numbers, it shows that the principal screws can be determined via the solution of a generalised eigenproblem of two real, symmetric matrices. This approach allows the study of the principal screws of the general two- and three-systems associated with a manipulator of arbitrary geometry, in terms of closed-form expressions of its architecture and configuration parameters. We also present novel methods for the determination of the principal screws of four- and five-systems which do not require the explicit computation of the reciprocal systems. Principal screws of systems of different orders are identified by one uniform criterion, namely that the pitches of the principal screws are the extreme values of the pitch. The classical results of screw theory, namely the equations for the cylindroid and the pitch hyperboloid associated with the two- and three-systems, respectively, have been derived within the proposed framework. Algebraic conditions have been derived for some of the special screw systems. The formulation is also illustrated with several examples, including two spatial manipulators of serial and parallel architecture, respectively.
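
Numerically, a generalised eigenproblem of two real, symmetric matrices, A v = λ B v, is solved directly by scipy.linalg.eigh. The matrices below are arbitrary placeholders standing in for the screw-system matrices (which the abstract does not give); the eigenvalues play the role of the extreme pitch values.

```python
import numpy as np
from scipy.linalg import eigh

A = np.array([[2.0, 0.5, 0.0],
              [0.5, 1.0, 0.3],
              [0.0, 0.3, 1.5]])   # symmetric placeholder
B = np.diag([1.0, 2.0, 4.0])     # symmetric positive-definite placeholder

pitches, screws = eigh(A, B)     # solves A v = lambda B v
print(pitches)                   # extreme values of the generalised Rayleigh quotient
```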

Relevance: 10.00%

Abstract:

The cumulative rule of Singh (1975) has been suggested in the literature for finding approximately optimum strata boundaries for proportional allocation, when the stratification is done on the study variable. This paper shows that for the class of density functions arising from the Wang and Aggarwal (1984) representation of the Lorenz curve (or DBV curves in the case of inventory theory), the Singh (1975) rule, instead of giving approximately optimum strata boundaries, yields exactly optimum boundaries. It is also shown that the conjecture of Mahalanobis (1952) that “. . . an optimum or nearly optimum solutions will be obtained when the expected contribution of each stratum to the total aggregate value of Y is made equal for all strata” yields exactly optimum strata boundaries for the case considered in the paper.
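
The Mahalanobis equal-aggregate conjecture quoted above translates directly into code: pick boundaries so that every stratum contributes the same share of the total aggregate of Y. The sketch below is a hedged illustration assuming stratification on a density given on a grid; the exponential density and the four-strata choice are placeholders.

```python
import numpy as np

def equal_aggregate_boundaries(x, fx, L):
    """Boundaries for L strata that equalise each stratum's share of cum x*f(x)."""
    g = np.cumsum(x * fx)
    g = g / g[-1]                    # normalised cumulative contribution to total Y
    targets = np.arange(1, L) / L    # equal shares 1/L, 2/L, ...
    return np.interp(targets, g, x)  # invert the cumulative curve

x = np.linspace(0.0, 10.0, 10_001)
fx = np.exp(-x)                      # placeholder density
print(equal_aggregate_boundaries(x, fx, L=4))
```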

Relevance: 10.00%

Abstract:

The mechanical properties of arterial walls have long been recognized to play an essential role in the development and progression of cardiovascular disease (CVD). Early detection of variations in the elastic modulus of arteries would help in monitoring patients at high cardiovascular risk and in stratifying them according to risk. An in vivo, non-invasive, high-resolution MR phase-contrast based method for the estimation of the time-dependent elastic modulus of healthy arteries was developed, validated in vitro by means of a thin-walled silicone rubber tube integrated into an existing MR-compatible flow simulator, and used on healthy volunteers. A comparison of the elastic modulus of the silicone tube measured with the MRI-based technique against direct measurements confirmed the method's capability. The repeatability of the method was assessed. Viscoelastic and inertial effects characterizing the dynamic response of arteries in vivo emerged from the comparison of the pressure waveform and the area-variation curve over one cycle. For all the volunteers who took part in the study, the elastic modulus was found to be in the range 50-250 kPa, to increase during the rising part of the cycle, and to decrease with decreasing pressure during the downstroke of systole and subsequent diastole.
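
The abstract does not state the estimator itself; one common way to combine a pressure waveform with a lumen-area waveform is the Peterson pressure-strain modulus, shown below as a generic illustration (synthetic waveforms, not the paper's method or data).

```python
import numpy as np

t = np.linspace(0.0, 1.0, 200)                   # one cardiac cycle (s)
pressure = 80 + 40 * np.sin(np.pi * t) ** 2      # mmHg, synthetic waveform
area = 30 + 3 * np.sin(np.pi * t) ** 2           # mm^2, synthetic waveform

Ps, Pd = pressure.max(), pressure.min()          # systolic / diastolic pressure
As, Ad = area.max(), area.min()                  # max / min lumen area
Ep_mmHg = (Ps - Pd) / ((As - Ad) / Ad)           # pressure-strain (Peterson) modulus
print(f"Ep = {Ep_mmHg * 0.1333:.0f} kPa")        # 1 mmHg = 0.1333 kPa
```

With these placeholder waveforms the result (about 53 kPa) happens to fall at the lower end of the 50-250 kPa range reported above.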

Relevance: 10.00%

Abstract:

Computation of the dependency basis is the fundamental step in solving the implication problem for MVDs in relational database theory. We examine this problem from an algebraic perspective. We introduce the notion of the inference basis of a set M of MVDs and show that it contains the maximum information about the logical consequences of M. We propose the notion of an MVD-lattice and develop an algebraic characterization of the inference basis using simple notions from lattice theory. We also establish several properties of MVD-lattices related to the implication problem. Based on our characterization, we synthesize efficient algorithms for (a) computing the inference basis of a given set M of MVDs; (b) computing the dependency basis of a given attribute set w.r.t. M; and (c) solving the implication problem for MVDs. Finally, we show that our results extend naturally to incorporate FDs as well, enabling the solution of the implication problem for FDs and MVDs taken together.
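
For concreteness, here is a sketch of the classical refinement procedure for the dependency basis (in the style of Beeri's algorithm as given in standard database texts), computed directly from a set M of MVDs; this is the baseline computation, not the paper's lattice-based method.

```python
from typing import FrozenSet, List, Set, Tuple

MVD = Tuple[FrozenSet[str], FrozenSet[str]]  # V ->> W

def dependency_basis(X: Set[str], M: List[MVD], U: Set[str]) -> List[FrozenSet[str]]:
    """Refine {U - X} by splitting blocks until no MVD in M cuts a block."""
    basis: List[FrozenSet[str]] = [frozenset(U - X)]
    changed = True
    while changed:
        changed = False
        for V, W in M:
            for Y in basis:
                # Split Y if it meets W, is not contained in W, and avoids V.
                if (Y & W) and (Y - W) and not (Y & V):
                    basis.remove(Y)
                    basis += [Y & W, Y - W]
                    changed = True
                    break
            if changed:
                break
    return basis

U = set("ABCD")
M = [(frozenset("A"), frozenset("B"))]   # A ->> B
print(dependency_basis({"A"}, M, U))     # blocks {B} and {C, D}, up to order
```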

Relevance: 10.00%

Abstract:

Developing accurate and reliable crop detection algorithms is an important step for harvesting automation in horticulture. This paper presents a novel approach to visual detection of highly occluded fruits. We use a conditional random field (CRF) on multi-spectral image data (colour and Near-Infrared Reflectance, NIR) to model two classes: crop and background. To describe these two classes, we explore a range of visual-texture features, including local binary patterns, histograms of oriented gradients, and learned auto-encoder features. The proposed methods are evaluated using hand-labelled images from a dataset captured on a commercial capsicum farm. Experimental results are presented, and performance is evaluated in terms of the Area Under the Curve (AUC) of the precision-recall curves. Our current results achieve a maximum performance of 0.81 AUC when combining all of the texture features in conjunction with colour information.
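
The evaluation metric above, area under the precision-recall curve, is straightforward to compute with scikit-learn; the labels and scores below are synthetic stand-ins for per-pixel CRF outputs.

```python
import numpy as np
from sklearn.metrics import auc, precision_recall_curve

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=1000)     # 1 = crop pixel, 0 = background (placeholder)
scores = np.clip(0.3 * y_true + rng.normal(0.4, 0.25, size=1000), 0.0, 1.0)

precision, recall, _ = precision_recall_curve(y_true, scores)
print("PR-AUC:", auc(recall, precision))   # area under the precision-recall curve
```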

Relevance: 10.00%

Abstract:

The high-temperature paraelectric phase of dicalcium lead propionate (DCLP) at 363 ± 5 K is tetragonal, with a = 12.574 (6) Å, c = 17.403 (9) Å, V = 2751.4 Å³ and Z = 4, and corresponds to the space group P4₁2₁2 (or P4₃2₁2). The thermal expansion curve shows the transition somewhere between 328 and 343 K.

Relevance: 10.00%

Abstract:

Liquids of silver-copper alloys with near-eutectic compositions embedded in a copper matrix were undercooled. Structural and microstructural investigations of these alloys, solidified from the undercooled state, indicated the absence of both the eutectic reaction and the diffusionless transformation below the equal-free-energy curve (T₀). Instead, the liquid maintained local equilibrium with the copper dendrites continuously until it intersected the extended solidus of the silver-rich solid solution.

Relevance: 10.00%

Abstract:

We derive a new method for determining size-transition matrices (STMs) that eliminates probabilities of negative growth and accounts for individual variability. STMs are an important part of size-structured models, which are used in the stock assessment of aquatic species. The elements of STMs represent the probability of growth from one size class to another, given a time step. The growth increment over this time step can be modelled with a variety of methods, but when a population construct is assumed for the underlying growth model, the resulting STM may contain entries that predict negative growth. To solve this problem, we use a maximum likelihood method that incorporates individual variability in the asymptotic length, relative age at tagging, and measurement error to obtain von Bertalanffy growth model parameter estimates. The statistical moments for the future length given an individual's previous length measurement and time at liberty are then derived. We moment match the true conditional distributions with skewed-normal distributions and use these to accurately estimate the elements of the STMs. The method is investigated with simulated tag-recapture data and tag-recapture data gathered from the Australian eastern king prawn (Melicertus plebejus).
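
The final step described above, filling the size-transition matrix from skew-normal distributions of future length, can be sketched as follows. The von Bertalanffy and skew-normal parameters here are placeholders; in the paper they come from the tag-recapture likelihood and moment matching.

```python
import numpy as np
from scipy.stats import skewnorm

edges = np.arange(10.0, 55.0, 5.0)            # size-class boundaries (mm)
mids = 0.5 * (edges[:-1] + edges[1:])
k, Linf, dt = 0.8, 50.0, 0.25                 # von Bertalanffy parameters (placeholders)
a, scale = 4.0, 2.0                           # skew-normal shape and scale (placeholders)

# Column j holds P(grow from class j into class i) over one time step.
STM = np.zeros((mids.size, mids.size))
for j, L in enumerate(mids):
    mu = L + (Linf - L) * (1.0 - np.exp(-k * dt))   # expected future length
    cdf = skewnorm.cdf(edges, a, loc=mu, scale=scale)
    STM[:, j] = np.diff(cdf)
    STM[:, j] /= STM[:, j].sum()              # renormalise mass outside the classes
print(STM.round(3))
```

Because each skew-normal is located at the predicted future length with positive skew, the columns place little mass below the current size class, which is the negative-growth problem the method is designed to eliminate.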

Relevance: 10.00%

Abstract:

We consider the development of statistical models for prediction of constituent concentrations of riverine pollutants, which is a key step in load estimation from frequent flow-rate data and less frequently collected concentration data. We consider how to capture the impacts of past flow patterns via the average discounted flow (ADF), which discounts the past flux based on the time elapsed: more recent fluxes are given more weight. However, the effectiveness of the ADF depends critically on the choice of the discount factor, which reflects the unknown environmental cumulating process of the concentration compounds. We propose to choose the discount factor by maximizing the adjusted R² value or the Nash-Sutcliffe model efficiency coefficient. The R² values are adjusted to take account of the number of parameters in the model fit. The resulting optimal discount factor can be interpreted as a measure of the constituent exhaustion rate during flood events. To evaluate the performance of the proposed regression estimators, we examine two different sampling scenarios by resampling fortnightly and opportunistically from two real daily datasets, which come from two United States Geological Survey (USGS) gaging stations located in the Des Plaines River and Illinois River basins. The generalized rating-curve approach produces biased estimates of the total sediment loads by -30% to 83%, whereas the new approaches produce much lower biases, ranging from -24% to 35%. This substantial improvement in the estimates of the total load is due to the fact that the predictability of concentration is greatly improved by the additional predictors.
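
The abstract does not give the ADF formula; the sketch below assumes one plausible form, an exponentially discounted weighted average of past daily flows, with the discount factor chosen on a grid by adjusted R² of a simple (placeholder) linear concentration model.

```python
import numpy as np

def average_discounted_flow(q, delta):
    """Weighted average of past flows with weights delta**(t - s) (assumed form)."""
    adf = np.empty_like(q, dtype=float)
    s = w = 0.0
    for t, qt in enumerate(q):
        s = delta * s + qt      # discounted sum of past fluxes
        w = delta * w + 1.0     # discounted count (normaliser)
        adf[t] = s / w
    return adf

def adjusted_r2(y, yhat, p):
    ss_res = np.sum((y - yhat) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1.0 - (ss_res / (y.size - p - 1)) / (ss_tot / (y.size - 1))

rng = np.random.default_rng(1)
q = rng.gamma(2.0, 50.0, size=365)                       # synthetic daily flow
conc = 5 + 0.05 * average_discounted_flow(q, 0.9) + rng.normal(0, 1, size=365)

best = (-np.inf, 0.0)
for d in np.linspace(0.5, 0.99, 50):
    x = average_discounted_flow(q, d)
    yhat = np.polyval(np.polyfit(x, conc, 1), x)         # concentration ~ ADF
    best = max(best, (adjusted_r2(conc, yhat, p=1), d))
print("best adjusted R^2 = %.3f at delta = %.2f" % best)
```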

Relevance: 10.00%

Abstract:

The contemporary methodology for growth models of organisms is based on continuous trajectories, which prevents us from modelling stepwise growth in crustacean populations. Growth models for fish are normally assumed to follow a continuous function, but a different type of model is needed for crustacean growth: crustaceans must moult in order to grow, so their growth is a discontinuous process due to the periodic shedding of the exoskeleton at moulting. This stepwise growth through the moulting process makes growth estimation more complex. Stochastic approaches can be used to model discontinuous growth, commonly known as "jumps" (Figure 1); however, the stochastic growth model must produce only positive jumps. In view of this, we introduce a subordinator, which is a special case of a Levy process. A subordinator is a non-decreasing Levy process, and it assists in modelling crustacean growth for a better understanding of individual variability and stochasticity in moulting periods and increments. We develop the parameter estimation methods and illustrate them with a dataset from laboratory experiments on the ornate rock lobster, Panulirus ornatus, which is found between Australia and Papua New Guinea. Because of sex effects on growth (Munday et al., 2004), we estimate the growth parameters separately for each sex. Since all hard parts are shed at each moult, exact age determination of a lobster is challenging; however, the growth parameters can be estimated from tank data through (i) inter-moult periods and (ii) moult increments. We derive a joint density made up of two functions, one for moult increments and the other for the time intervals between moults. We claim these functions are conditionally independent given pre-moult length and inter-moult period, by the Markov property, so the parameters in each function can be estimated separately. Subsequently, we integrate the two functions through a Monte Carlo method, and we can therefore obtain a population mean for crustacean growth (e.g. the red curve in Figure 1).
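
A minimal simulation of this stepwise-growth idea is given below, using a compound Poisson subordinator (one concrete non-decreasing Levy process): exponential inter-moult periods and gamma-distributed positive moult increments. All rates and sizes are placeholders, not the fitted lobster parameters.

```python
import numpy as np

rng = np.random.default_rng(7)

def simulate_growth(l0=30.0, horizon=3.0, moult_rate=4.0, shape=3.0, scale=1.5):
    """Return moult times (years) and lengths for one simulated individual."""
    t, L = [0.0], [l0]
    while True:
        wait = rng.exponential(1.0 / moult_rate)   # inter-moult period
        if t[-1] + wait > horizon:
            break
        t.append(t[-1] + wait)
        L.append(L[-1] + rng.gamma(shape, scale))  # positive jump only
    return np.array(t), np.array(L)

times, lengths = simulate_growth()
print(np.c_[times.round(2), lengths.round(1)])     # stepwise growth path
```

Averaging many such simulated paths over a grid of ages is one way to obtain the population mean growth curve referred to above.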

Relevance: 10.00%

Abstract:

We consider estimating the total load from frequent flow data but less frequent concentration data. There are numerous load estimation methods available, some of which are captured in various online tools. However, most estimators are subject to large statistical biases, and their associated uncertainties are often not reported. This makes interpretation difficult, trends hard to estimate, and optimal sampling regimes impossible to assess. In this paper, we first propose two indices for measuring the extent of sampling bias, and then provide steps for obtaining reliable load estimates that minimize the biases and make use of informative predictive variables. The key step in this approach is the development of an appropriate predictive model for concentration. This is achieved using a generalized rating-curve approach with additional predictors that capture unique features in the flow data, such as the concept of the first flush, the location of the event on the hydrograph (e.g. rise or fall), and the discounted flow. The latter may be thought of as a measure of constituent exhaustion occurring during flood events. Adding this information can significantly improve the predictability of concentration, and ultimately the precision with which the pollutant load is estimated. We also provide a measure of the standard error of the load estimate which incorporates model, spatial and/or temporal errors. The method also has the capacity to incorporate measurement error incurred through the sampling of flow. We illustrate this approach for two rivers delivering to the Great Barrier Reef, Queensland, Australia. One is a dataset from the Burdekin River, consisting of total suspended sediment (TSS), nitrogen oxides (NOx) and gauged flow for 1997. The other dataset is from the Tully River, for the period July 2000 to June 2008. For NOx in the Burdekin, the new estimates are very similar to the ratio estimates even when there is no relationship between concentration and flow. However, for the Tully dataset, incorporating the additional predictive variables, namely the discounted flow and the flow phase (rising or recessing), substantially improved the model fit, and thus the certainty with which the load is estimated.
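
A hedged sketch of such a rating-curve model with the extra predictors named above (discounted flow and hydrograph phase) is given below; the exact model form in the paper may differ, and the data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(2)
q = rng.gamma(2.0, 40.0, size=500)                     # flow series (placeholder)
rising = (np.r_[0.0, np.diff(q)] > 0).astype(float)    # hydrograph phase indicator

disc = np.empty_like(q)                                # discounted flow
disc[0] = q[0]
for t in range(1, q.size):
    disc[t] = 0.9 * disc[t - 1] + q[t]

# Synthetic log-concentration generated from the same predictors.
log_c = 0.5 + 0.4 * np.log(q) - 0.001 * disc + 0.2 * rising + rng.normal(0, 0.2, 500)

X = np.column_stack([np.ones_like(q), np.log(q), disc, rising])
beta, *_ = np.linalg.lstsq(X, log_c, rcond=None)       # generalized rating-curve fit
print(beta)                       # intercept, log-flow, discounted-flow, phase effects
```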

Relevance: 10.00%

Abstract:

There are numerous load estimation methods available, some of which are captured in various online tools. However, most estimators are subject to large statistical biases, and their associated uncertainties are often not reported. This makes interpretation difficult, trends hard to estimate, and optimal sampling regimes impossible to assess. In this paper, we first propose two indices for measuring the extent of sampling bias, and then provide steps for obtaining reliable load estimates by minimizing the biases and making use of possible predictive variables. The load estimation procedure can be summarized by the following four steps:

(i) output the flow rates at regular time intervals (e.g. 10 minutes) using a time series model that captures all the peak flows;

(ii) output the predicted flow rates, as in (i), at the concentration sampling times, if the corresponding flow rates are not collected;

(iii) establish a predictive model for the concentration data, which incorporates all possible predictor variables, and output the predicted concentrations at the regular time intervals as in (i); and

(iv) obtain the sum, over the regular time intervals, of the products of predicted flow and predicted concentration, to represent an estimate of the load.

The key step in this approach is the development of an appropriate predictive model for concentration. This is achieved using a generalized regression (rating-curve) approach with additional predictors that capture unique features in the flow data, namely the concept of the first flush, the location of the event on the hydrograph (e.g. rise or fall) and the cumulative discounted flow. The latter may be thought of as a measure of constituent exhaustion occurring during flood events. The model also has the capacity to accommodate autocorrelation in the model errors which results from intensive sampling during floods. Incorporating this additional information can significantly improve the predictability of concentration, and ultimately the precision with which the pollutant load is estimated. We also provide a measure of the standard error of the load estimate which incorporates model, spatial and/or temporal errors. The method also has the capacity to incorporate measurement error incurred through the sampling of flow. We illustrate this approach using concentrations of total suspended sediment (TSS) and nitrogen oxides (NOx) and gauged flow data from the Burdekin River, a catchment delivering to the Great Barrier Reef. The sampling biases for NOx concentrations range from a factor of 2 to 10, indicating severe bias. As expected, the traditional average and extrapolation methods produce much higher estimates than those obtained when the sampling bias is taken into account.
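
Step (iv) is the simplest to make concrete: a load estimate is the sum of predicted flow times predicted concentration over the regular grid. The sketch below stubs out steps (i)-(iii) with placeholder arrays and does the unit bookkeeping.

```python
import numpy as np

dt = 600.0                                          # 10-minute interval, in seconds
rng = np.random.default_rng(3)
flow_pred = rng.gamma(2.0, 30.0, size=6 * 24 * 30)  # m^3/s; placeholder for step (i)
conc_pred = 5.0 + 0.05 * flow_pred                  # mg/L; placeholder for step (iii)

# mg/L equals g/m^3, so flow * concentration * dt gives grams; convert to tonnes.
load_tonnes = np.sum(flow_pred * conc_pred * dt) / 1e6
print(f"estimated load over the period: {load_tonnes:.1f} t")
```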

Relevance: 10.00%

Abstract:

The Fabens method is commonly used to estimate the growth parameters k and L∞ in the von Bertalanffy model from tag-recapture data. However, the Fabens method of estimation has an inherent bias when individual growth is variable. This paper presents an asymptotically unbiased method, using a maximum likelihood approach, that takes account of individual variability in both maximum length and age-at-tagging. It is assumed that each individual's growth follows a von Bertalanffy curve with its own maximum length and age-at-tagging. The parameter k is assumed to be a constant, to ensure that the mean growth follows a von Bertalanffy curve and to avoid overparameterization. Our method also makes more efficient use of the measurements at tagging and recapture, and includes diagnostic techniques for checking distributional assumptions. The method is reasonably robust and performs better than the Fabens method when individual growth differs from the von Bertalanffy relationship. When measurement error is negligible, the estimation involves maximizing the profile likelihood of one parameter only. The method is applied to tag-recapture data for the grooved tiger prawn (Penaeus semisulcatus) from the Gulf of Carpentaria, Australia.
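
For reference, the Fabens estimator discussed above fits the predicted increment ΔL = (L∞ − L₁)(1 − e^(−kΔt)) to tag-recapture pairs by least squares; the sketch below uses synthetic data with individual variability in L∞, the very setting in which the paper shows this estimator is biased.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(4)
n = 200
L1 = rng.uniform(15.0, 35.0, n)                 # length at tagging
dt = rng.uniform(0.2, 1.5, n)                   # time at liberty (years)
Linf_i = rng.normal(40.0, 3.0, n)               # individual asymptotic lengths
dL = (Linf_i - L1) * (1 - np.exp(-1.2 * dt)) + rng.normal(0, 0.5, n)

def residuals(theta):
    Linf, k = theta
    return dL - (Linf - L1) * (1 - np.exp(-k * dt))

fit = least_squares(residuals, x0=[45.0, 1.0])  # Fabens-style least squares
print("Fabens estimates (Linf, k):", fit.x.round(2))
```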

Relevance: 10.00%

Abstract:

The bentiromide test was evaluated using plasma p-aminobenzoic acid as an indirect test of pancreatic insufficiency in young children between 2 months and 4 years of age. To determine the optimal test method, the following were examined: (a) the best dose of bentiromide (15 mg/kg or 30 mg/kg); (b) the optimal sampling time for plasma p-aminobenzoic acid; and (c) the effect of coadministration of a liquid meal. Sixty-nine children (1.6 ± 1.0 years) were studied, including 34 controls with normal fat absorption and 35 patients (34 with cystic fibrosis) with fat maldigestion due to pancreatic insufficiency. Control and pancreatic-insufficient subjects were studied in three age-matched groups: (a) low-dose bentiromide (15 mg/kg) with clear fluids; (b) high-dose bentiromide (30 mg/kg) with clear fluids; and (c) high-dose bentiromide with a liquid meal. Plasma p-aminobenzoic acid was determined at 0, 30, 60, and 90 minutes, then hourly for 6 hours. The dose effect of bentiromide with clear liquids was evaluated. High-dose bentiromide best discriminated control and pancreatic-insufficient subjects, owing to a higher peak plasma p-aminobenzoic acid level in controls, but sensitivity and specificity remained poor. High-dose bentiromide with a liquid meal produced a delayed increase in plasma p-aminobenzoic acid in the control subjects, probably caused by delayed gastric emptying. However, in the pancreatic-insufficient subjects, use of a liquid meal resulted in significantly lower plasma p-aminobenzoic acid levels at all time points; plasma p-aminobenzoic acid at 2 and 3 hours completely discriminated between control and pancreatic-insufficient patients. Evaluation of the data by area under the time-concentration curve failed to improve test results. In conclusion, the bentiromide test is a simple, clinically useful means of detecting pancreatic insufficiency in young children, but the higher dose administered with a liquid meal is recommended.
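
The area under the time-concentration curve mentioned above is conventionally computed with the trapezoidal rule over the sampling schedule used in the study (0, 30, 60, 90 minutes, then hourly to 6 hours); the concentration values below are illustrative only.

```python
import numpy as np
from scipy.integrate import trapezoid

t = np.array([0, 0.5, 1.0, 1.5, 2, 3, 4, 5, 6])    # hours (study sampling times)
paba = np.array([0, 4, 9, 14, 16, 13, 9, 6, 4])    # plasma PABA, illustrative values

auc = trapezoid(paba, t)                            # trapezoidal rule
print(f"AUC(0-6 h) = {auc:.1f} concentration*h")
```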