Abstract:
Many websites now let users rate the quality of items, and these ratings are later aggregated into item reputation scores. The majority of websites apply the mean method to aggregate user ratings; this method is simple but is not considered an accurate aggregator. Many methods have been proposed to produce more accurate reputation scores, and in most of them the authors use extra information about the rating providers or about the context (e.g. time) in which the rating was given. However, this information is not always available, and in such cases these methods fall back on the mean method or other simple alternatives. In this paper, we propose a novel reputation model that generates more accurate item reputation scores based on the collected ratings alone. Our proposed model exploits previously disregarded statistical properties of a given rating dataset in order to enhance the accuracy of the generated reputation scores. In more detail, we use the Beta distribution to produce weights for ratings and aggregate the ratings using the weighted mean method. Experiments show that the proposed model outperforms current state-of-the-art models.
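As a rough illustration of the weighting idea (one plausible reading, not the authors' exact scheme), the sketch below fits a Beta distribution to the normalized ratings by the method of moments and uses its density as rating weights in a weighted mean; the rating scale, clipping, and example data are assumptions.

```python
# Hypothetical sketch of Beta-weighted rating aggregation.
import numpy as np
from scipy import stats

def beta_weighted_reputation(ratings, r_min=1, r_max=5):
    r = (np.asarray(ratings, float) - r_min) / (r_max - r_min)
    r = np.clip(r, 0.01, 0.99)          # keep ratings inside the open interval
    m, v = r.mean(), r.var()
    if v < 1e-9:                        # all ratings (nearly) identical
        return float(np.mean(ratings))
    common = m * (1 - m) / v - 1        # method-of-moments Beta parameters
    a, b = m * common, (1 - m) * common
    w = stats.beta.pdf(r, a, b)         # ratings near the consensus weigh more
    w /= w.sum()
    return r_min + (r_max - r_min) * float(np.sum(w * r))

print(beta_weighted_reputation([4, 5, 3, 4, 5, 4, 2]))
```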
Abstract:
This paper presents a novel three-dimensional hybrid smoothed finite element method (H-SFEM) for solid mechanics problems. In 3D H-SFEM, the strain field is assumed to be the weighted average of the compatible strains from the finite element method (FEM) and the smoothed strains from the node-based smoothed FEM, controlled by a parameter α. By adjusting α, upper and lower bound solutions in the strain energy norm and in the eigenfrequencies can always be obtained. With an optimized α, 3D H-SFEM on a tetrahedral mesh possesses close-to-exact stiffness of the continuous system and produces ultra-accurate solutions in terms of displacement, strain energy and eigenfrequencies in both linear and nonlinear problems. A novel domain-based selective scheme is proposed, leading to a combined selective H-SFEM model that is immune to volumetric locking and hence works well for nearly incompressible materials. The proposed 3D H-SFEM is thus a distinctive numerical method with great potential for application to solid mechanics problems.
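The strain blend can be stated in one line; the sketch below is a minimal rendering under the convention that α = 1 recovers pure FEM strains (conventions for the direction of α vary across papers).

```python
# Minimal sketch of the assumed H-SFEM strain field.
import numpy as np

def hybrid_strain(eps_fem, eps_ns, alpha):
    """Blend compatible FEM strains with node-based smoothed strains:
    alpha = 1 recovers FEM, alpha = 0 recovers NS-FEM."""
    return alpha * np.asarray(eps_fem) + (1.0 - alpha) * np.asarray(eps_ns)

print(hybrid_strain([1.0e-3], [1.2e-3], alpha=0.6))
```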
Abstract:
Quantifying the stiffness properties of soft tissues is essential for the diagnosis of many cardiovascular diseases such as atherosclerosis. In these pathologies it is widely agreed that arterial wall stiffness is an indicator of vulnerability. The present paper focuses on the carotid artery and proposes a new inversion methodology for deriving the stiffness properties of the wall from cine-MRI (magnetic resonance imaging) data. We address this problem by setting up a cost function defined as the distance between the modeled pixel signals and the measured ones; minimizing this cost function yields the unknown stiffness properties of both the arterial wall and the surrounding tissues. The sensitivity of the identified properties to various sources of uncertainty is studied. The method is validated on a rubber phantom, for which the elastic modulus is identified with a mean error of 9.6%. It is then applied to two young healthy subjects as a proof of practical feasibility, yielding identified values of 625 kPa and 587 kPa for one carotid artery of each subject.
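A minimal sketch of the inversion loop, with a toy thin-walled-tube forward model standing in for the paper's pixel-signal model; every number here is invented.

```python
# Hedged sketch: identify stiffness by minimizing a model-vs-measurement cost.
import numpy as np
from scipy.optimize import minimize

R, H = 3e-3, 0.5e-3                        # lumen radius, wall thickness [m]
pressures = np.linspace(10e3, 16e3, 8)     # assumed pulse pressures [Pa]

def wall_displacement(E):
    return pressures * R**2 / (E * H)      # thin-wall radial displacement

rng = np.random.default_rng(0)
measured = wall_displacement(600e3) + rng.normal(0, 1e-6, pressures.size)

cost = lambda x: np.sum((wall_displacement(x[0]) - measured) ** 2)
E_hat = minimize(cost, x0=[300e3], method="Nelder-Mead").x[0]
print(f"identified E ≈ {E_hat / 1e3:.0f} kPa")
```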
Abstract:
Rupture of atherosclerotic plaque is a major cause of mortality. Plaque stress analysis, based on patient-specific multisequence in vivo MRI, can provide critical information for the understanding of plaque rupture and could eventually lead to plaque rupture prediction. However, the direct link between stress and plaque rupture is not fully understood. In the present study, the plaque of a patient who had recently experienced a transient ischaemic attack (TIA) was studied using a fluid-structure interaction method to quantify the stress distribution in the plaque region based on in vivo MR images. The results showed that wall shear stress is generally low in the artery, with a slight increase at the plaque throat owing to minor luminal narrowing. The oscillatory shear index is much higher in the proximal part of the plaque. Both the local wall stress concentrations and the distribution of relative stress variation during a cardiac cycle indicate that the actual plaque rupture site coincides with the region of highest rupture risk in the studied patient.
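For reference, the oscillatory shear index mentioned above is commonly computed as OSI = ½(1 − |∫τ dt| / ∫|τ| dt) over a cardiac cycle; the snippet below evaluates it on an invented wall-shear-stress trace.

```python
# Toy evaluation of the oscillatory shear index (OSI).
import numpy as np

def oscillatory_shear_index(tau, t):
    """OSI = 0.5 * (1 - |integral(tau)| / integral(|tau|)):
    0 for unidirectional shear, 0.5 for purely oscillatory shear."""
    return 0.5 * (1.0 - abs(np.trapz(tau, t)) / np.trapz(np.abs(tau), t))

t = np.linspace(0.0, 1.0, 200)              # one cardiac cycle [s]
tau = 0.4 + 1.2 * np.sin(2 * np.pi * t)     # toy WSS trace [Pa]
print(f"OSI = {oscillatory_shear_index(tau, t):.3f}")
```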
Abstract:
The mechanical properties of arterial walls have long been recognized to play an essential role in the development and progression of cardiovascular disease (CVD). Early detection of variations in the elastic modulus of arteries would help in monitoring patients at high cardiovascular risk and in stratifying them according to risk. An in vivo, non-invasive, high-resolution MR phase-contrast based method for estimating the time-dependent elastic modulus of healthy arteries was developed, validated in vitro by means of a thin-walled silicone rubber tube integrated into an existing MR-compatible flow simulator, and used on healthy volunteers. A comparison of the elastic modulus of the silicone tube measured with the MRI-based technique against direct measurements confirmed the method's capability, and the repeatability of the method was assessed. Viscoelastic and inertial effects characterizing the dynamic response of arteries in vivo emerged from the comparison of the pressure waveform and the area variation curve over a period. For all the volunteers who took part in the study, the elastic modulus was found to lie in the range 50-250 kPa, to increase during the rising part of the cycle, and to decrease with decreasing pressure during the downstroke of systole and the subsequent diastole.
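A hedged sketch of one way to obtain a time-resolved modulus from paired pressure and lumen-area data, using a thin-wall (Laplace) assumption rather than the paper's exact estimator; the linear ramps stand in for the systolic upstroke and all values are invented.

```python
# Illustrative incremental modulus from pressure-area waveforms.
import numpy as np

p = np.linspace(10.6e3, 15.9e3, 50)      # pressure, ~80-120 mmHg [Pa]
area = np.linspace(28e-6, 32e-6, 50)     # lumen area [m^2]
h = 0.6e-3                               # wall thickness [m]

r = np.sqrt(area / np.pi)                # lumen radius
sigma = p * r / h                        # thin-wall hoop stress
eps = (r - r[0]) / r[0]                  # circumferential strain
E_inc = np.gradient(sigma) / np.gradient(eps)   # d(sigma)/d(eps) along cycle
print(f"E_inc ≈ {E_inc.mean() / 1e3:.0f} kPa")
```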
Abstract:
The quality of ultrasound computed tomography imaging is primarily determined by the accuracy of ultrasound transit time measurement. A major problem in the analysis is the overlap of signals, which makes it difficult to detect the correct transit time. The current standard is to apply a matched-filtering approach to the input and output signals. This study compares the matched-filtering technique with active set deconvolution for deriving a transit time spectrum from a coded excitation chirp signal and the measured output signal. The ultrasound wave travels along a direct and a reflected path to the receiver, resulting in an overlap in the recorded output signal. The matched-filtering and deconvolution techniques were applied to determine the transit times associated with the two signal paths. Both techniques were able to detect the two different transit times; matched filtering achieves better accuracy (0.13 μs vs. 0.18 μs standard deviation), while deconvolution improves the side-lobe to main-lobe ratio by a factor of 3.5. Higher side-lobe suppression is important for further improving image fidelity. These results suggest that a future combination of both techniques would provide improved signal detection and hence improved image fidelity.
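The matched-filtering baseline can be illustrated by cross-correlating the received signal with the transmitted chirp and reading off the lag of the correlation peak; the parameters below are invented, only the direct path is simulated, and the deconvolution step is not shown.

```python
# Toy matched-filter transit-time estimate for a chirp excitation.
import numpy as np
from scipy.signal import chirp, correlate

fs = 50e6                                    # sampling rate [Hz]
t = np.arange(0, 20e-6, 1 / fs)
tx = chirp(t, f0=1e6, t1=t[-1], f1=5e6)      # coded excitation

n_delay = int(round(6.4e-6 * fs))            # true transit time: 6.4 us
rx = np.zeros(t.size + n_delay)
rx[n_delay:] += tx                           # delayed direct path
rx += np.random.default_rng(1).normal(0, 0.1, rx.size)

corr = correlate(rx, tx, mode="valid")       # matched filter
print(f"estimated transit time: {np.argmax(corr) / fs * 1e6:.2f} us")
```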
Abstract:
The method of generalized estimating equations (GEEs) provides consistent estimates of the regression parameters in a marginal regression model for longitudinal data, even when the working correlation model is misspecified (Liang and Zeger, 1986). However, the efficiency of a GEE estimate can be seriously affected by the choice of the working correlation model. This study addresses the problem by proposing a hybrid method that combines multiple GEEs based on different working correlation models, using the empirical likelihood method (Qin and Lawless, 1994). Analyses show that this hybrid method is more efficient than a GEE that uses a misspecified working correlation model. Furthermore, if one of the working correlation structures correctly models the within-subject correlations, then the hybrid method provides the most efficient parameter estimates. In simulations, the hybrid method's finite-sample performance is superior to that of a GEE under any of the commonly used working correlation models, and it is almost fully efficient in all scenarios studied. The hybrid method is illustrated using data from a longitudinal study of respiratory infection rates in 275 Indonesian children.
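For context, the sketch below fits GEEs under two common working correlation structures with statsmodels on synthetic longitudinal data; the empirical-likelihood combination itself is not reproduced here.

```python
# GEE fits under different working correlation structures (sketch).
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n_subj, n_visits = 100, 4
df = pd.DataFrame({
    "id": np.repeat(np.arange(n_subj), n_visits),
    "x": rng.normal(size=n_subj * n_visits),
})
df["y"] = 1.0 + 0.5 * df["x"] + rng.normal(size=len(df))

for cs in (sm.cov_struct.Independence(), sm.cov_struct.Exchangeable()):
    res = sm.GEE.from_formula("y ~ x", groups="id", data=df,
                              cov_struct=cs).fit()
    print(type(cs).__name__, res.params.round(3).to_dict())
```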
Abstract:
The extended recruitment season for short-lived species such as prawns biases the estimation of growth parameters from length-frequency data when conventional methods are used. We propose a simple method for overcoming this bias given a time series of length-frequency data. The difficulties arising from extended recruitment are eliminated by predicting the growth of the succeeding samples and the length increments of the recruits in previous samples. This method requires that some maximum size at recruitment can be specified. The advantages of this multiple length-frequency method are: it is simple to use; it requires only three parameters; no specific distributions need to be assumed; and the actual seasonal recruitment pattern does not have to be specified. We illustrate the new method with length-frequency data on the tiger prawn Penaeus esculentus from the north-western Gulf of Carpentaria, Australia.
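The growth-prediction building block can be illustrated with a von Bertalanffy projection of a sample's lengths forward in time; the parameter values below are invented, and the paper's full three-parameter method is not shown.

```python
# Sketch: project lengths forward under von Bertalanffy growth.
import numpy as np

def project_lengths(lengths, dt, l_inf, k):
    """L(t + dt) = L_inf - (L_inf - L(t)) * exp(-k * dt)."""
    L = np.asarray(lengths, float)
    return l_inf - (l_inf - L) * np.exp(-k * dt)

# carapace lengths [mm] projected a quarter-year ahead
print(project_lengths([20.0, 25.0, 30.0], dt=0.25, l_inf=45.0, k=1.8))
```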
Abstract:
We propose a simple method of constructing quasi-likelihood functions for dependent data based on conditional mean-variance relationships, and apply the method to estimating the fractal dimension from box-counting data. Simulation studies were carried out to compare this method with the traditional methods. We also applied this technique to real data from fishing grounds in the Gulf of Carpentaria, Australia.
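A toy box-counting estimate, for orientation: ordinary least squares on log N(ε) versus log(1/ε); the quasi-likelihood treatment of dependent counts is not reproduced.

```python
# Box-counting dimension by OLS on log N(eps) vs log(1/eps) (sketch).
import numpy as np

def box_counting_dimension(points, epsilons):
    counts = [len(np.unique(np.floor(points / e), axis=0)) for e in epsilons]
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(epsilons)),
                          np.log(counts), 1)
    return slope

pts = np.random.default_rng(2).random((5000, 2))        # filled unit square
print(box_counting_dimension(pts, [0.25, 0.1, 0.05]))   # ≈ 2
```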
Abstract:
The primary goal of a phase I trial is to find the maximally tolerated dose (MTD) of a treatment, where the MTD is usually defined in terms of a tolerable probability of toxicity, q*. Our objective is to find the highest dose whose toxicity risk does not exceed q*, a criterion that is often desired in designing phase I trials. This criterion differs from that of finding the dose with toxicity risk closest to q*, which is used in methods such as the continual reassessment method. We use the theory of decision processes to find optimal sequential designs that maximize the expected number of patients within the trial allocated to the highest dose, among those under consideration, with toxicity not exceeding q*. The proposed method is very general in the sense that criteria other than the one considered here can be optimized, and optimal dose assignment can be defined in terms of patients within or outside the trial. It includes the continual reassessment method as an important special case. A numerical study indicates that the strategy compares favourably with other phase I designs.
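A hedged sketch of the selection criterion only (not the optimal sequential design itself): pick the highest dose whose Beta posterior gives P(toxicity ≤ q*) above a threshold, assuming monotone toxicity; the uniform prior and threshold are assumptions.

```python
# Illustrative Beta-Binomial rule for "highest dose not exceeding q*".
from scipy import stats

def highest_safe_dose(tox, n, q_star, confidence=0.5):
    """Doses are in ascending order; return the highest dose index whose
    posterior P(toxicity <= q*) exceeds `confidence`, stopping at the
    first dose that fails (monotone toxicity assumed)."""
    best = -1
    for i, (x, m) in enumerate(zip(tox, n)):
        post = stats.beta(1 + x, 1 + m - x)    # posterior, uniform prior
        if post.cdf(q_star) > confidence:
            best = i                           # dose i looks tolerable
        else:
            break
    return best

print(highest_safe_dose(tox=[0, 1, 3], n=[6, 6, 6], q_star=0.30))
```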
Abstract:
A simple stochastic model of a fish population subject to natural and fishing mortalities is described. The fishing effort is assumed to vary over different periods but to be constant within each period. A maximum-likelihood approach is developed for estimating natural mortality (M) and the catchability coefficient (q) simultaneously from catch-and-effort data. If there is not enough contrast in the data to provide reliable estimates of both M and q, as is often the case in practice, the method can be used to obtain the best possible values of q for a range of possible values of M. These techniques are illustrated with tiger prawn (Penaeus semisulcatus) data from the Northern Prawn Fishery of Australia.
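One plausible rendering of the likelihood idea, assuming a Baranov catch equation with F_t = qE_t and lognormal errors, estimating q and initial abundance for a fixed M; the data and model details are invented, not the paper's exact formulation.

```python
# Hedged sketch of a catch-and-effort likelihood with F_t = q * E_t.
import numpy as np
from scipy.optimize import minimize

def neg_log_lik(params, catches, efforts, M):
    log_q, log_N0 = params
    q, N = np.exp(log_q), np.exp(log_N0)
    nll = 0.0
    for C_obs, E in zip(catches, efforts):
        Z = M + q * E                                 # total mortality
        C_hat = (q * E / Z) * (1 - np.exp(-Z)) * N    # Baranov catch
        nll += (np.log(C_obs) - np.log(C_hat)) ** 2   # lognormal SSE
        N *= np.exp(-Z)                               # survivors
    return nll

catches, efforts, M = [120.0, 90.0, 60.0], [1.0, 1.2, 1.1], 0.2
fit = minimize(neg_log_lik, x0=[np.log(0.3), np.log(1000.0)],
               args=(catches, efforts, M), method="Nelder-Mead")
print(dict(zip(["q", "N0"], np.exp(fit.x).round(3))))
```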
Abstract:
In the analysis of tagging data, it has been found that the least-squares method, based on the increment function known as the Fabens method, produces biased estimates because individual variability in growth is not allowed for. This paper modifies the Fabens method to account for individual variability in the length asymptote. Significance tests using t-statistics or log-likelihood ratio statistics may be applied to show the level of individual variability. Simulation results indicate that the modified method reduces the biases in the estimates to negligible proportions. Tagging data from tiger prawns (Penaeus esculentus and Penaeus semisulcatus) and rock lobster (Panulirus ornatus) are analysed as an illustration.
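For reference, the classical Fabens increment fit that the paper modifies can be sketched as a least-squares fit of ΔL = (L∞ − L₁)(1 − e^(−KΔt)); the random-asymptote extension is not reproduced, and the data below are toy values.

```python
# Sketch of the classical Fabens increment fit.
import numpy as np
from scipy.optimize import curve_fit

def fabens_increment(X, l_inf, k):
    l1, dt = X
    return (l_inf - l1) * (1.0 - np.exp(-k * dt))

l1 = np.array([20.0, 25.0, 30.0, 35.0])    # length at tagging [mm]
dt = np.array([0.50, 1.00, 0.75, 1.25])    # time at liberty [yr]
dl = fabens_increment((l1, dt), 45.0, 1.2) # noise-free toy increments
params, _ = curve_fit(fabens_increment, (l1, dt), dl, p0=[40.0, 1.0])
print(params)                              # recovers L_inf = 45, K = 1.2
```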
Abstract:
Traditional comparisons between the capture efficiency of sampling devices have generally looked at the absolute differences between devices. We recommend instead that the signal-to-noise ratio be used when comparing the capture efficiency of benthic sampling devices. Using the signal-to-noise ratio rather than the absolute difference has several advantages: the variance is taken into account when judging how important a difference is; the hypothesis and minimum detectable difference can be made identical for all taxa; it is independent of the units of measurement; and the sample-size calculation is independent of the variance. This new technique is illustrated by comparing the capture efficiency of a 0.05 m² van Veen grab and an airlift suction device, using samples taken from Heron and One Tree lagoons, Australia.
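A minimal sketch, assuming the statistic is the mean paired difference scaled by its standard deviation (the paper's exact definition may differ); the counts are invented.

```python
# Toy signal-to-noise comparison of two sampling devices.
import numpy as np

def signal_to_noise(a, b):
    d = np.asarray(a, float) - np.asarray(b, float)
    return d.mean() / d.std(ddof=1)

grab = np.array([12, 15, 9, 14, 11])      # counts per sample, van Veen grab
airlift = np.array([10, 13, 10, 11, 9])   # counts per sample, airlift
print(f"SNR = {signal_to_noise(grab, airlift):.2f}")
```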
Abstract:
In this paper we present a novel application of scenario methods to engage a diverse constituency of senior stakeholders, with limited time availability, in debate to inform planning and policy development. Our case study project explores post-carbon futures for the Latrobe Valley region of the Australian state of Victoria. Our approach involved the initial deductive development of two ‘extreme scenarios’ by a multi-disciplinary research team, based upon an extensive research programme. Over four workshops with the stakeholder constituency, these initial scenarios were discussed, challenged, refined and expanded through an inductive process, whereby participants took ‘ownership’ of a final set of three scenarios that they found both comfortable and challenging. The outcomes of this process subsequently informed public policy development for the region. Whilst this process did not follow a single extant structured, multi-stage scenario approach, neither was it devoid of form. Here we seek to theorise and codify elements of our process, which we term ‘scenario improvisation’, so that others may adopt it.
Abstract:
Heavy haul railway lines are important and expensive items of infrastructure operating in an environment which is increasingly focussed on risk-based management and constrained profit margins. It is vital that costs are minimised but also that infrastructure satisfies failure criteria and standards of reliability which account for the random nature of wheel-rail forces and of the properties of the materials in the track. In Australia and the USA, concrete railway sleepers/ties are still designed using methods which the rest of the civil engineering world discarded decades ago in favour of the more rational, more economical and probabilistically based, limit states design (LSD) concept. This paper describes a LSD method for concrete sleepers which is based on (a) billions of measurements over many years of the real, random wheel-rail forces on heavy haul lines, and (b) the true capacity of sleepers. The essential principles on which the new method is based are similar to current, widely used LSD-based standards for concrete structures. The paper proposes and describes four limit states which a sleeper must satisfy, namely: strength; operations; serviceability; and fatigue. The method has been applied commercially to two new major heavy haul lines in Australia, where it has saved clients millions of dollars in capital expenditure.
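In the LSD spirit described above, each limit state reduces to a factored-load versus factored-capacity check, γS ≤ φR; the sketch below is illustrative, with invented factors and values rather than the standard's actual parameters.

```python
# Illustrative limit-states design (LSD) check.
def passes_limit_state(load_effect, load_factor, capacity, capacity_factor):
    """Check gamma * S <= phi * R for one limit state."""
    return load_factor * load_effect <= capacity_factor * capacity

# e.g. a strength check on a sleeper bending moment [kNm]
print(passes_limit_state(load_effect=18.0, load_factor=1.6,
                         capacity=40.0, capacity_factor=0.8))
```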