892 results for Location precision


Relevance:

20.00%

Publisher:

Abstract:

Background: Rupture of a vulnerable atheromatous plaque in the carotid and coronary arteries often leads to stroke and heart attack, respectively. The role of calcium deposition and its contribution to plaque stability is controversial. This study uses both an idealized and a patient-specific model to evaluate the effect of a calcium deposit on the stress distribution within an atheromatous plaque. Methods: Using a finite-element method, structural analysis was performed on an idealized plaque model in which the location of a calcium deposit was varied. In addition to the idealized model, in vivo high-resolution MR imaging was performed on 3 patients with carotid atheroma and stress distributions were generated. These plaques were chosen because they had calcium at varying locations with respect to the lumen and the fibrous cap. Results: The predicted maximum stress was increased by 47.5% when the calcium deposit was located in the thin fibrous cap, compared with a model without a deposit. Adding a calcium deposit either to the lipid core or remote from the lumen produced almost no increase in maximal stress. Conclusion: Calcification at the thin fibrous cap may result in high stress concentrations, ultimately increasing the risk of plaque rupture. Assessing the location of calcification may, in the future, aid the risk stratification of patients with carotid stenosis.

Relevance:

20.00%

Publisher:

Abstract:

High-resolution, USPIO-enhanced MR imaging can be used to identify inflamed atherosclerotic plaque. We report the case of a 79-year-old man with a symptomatic carotid stenosis of 82%. The plaque was retrieved for histology, and finite element analysis (FEA) based on the preoperative MR imaging was used to predict the maximal von Mises stress on the plaque. Macrophage location correlated with the maximal predicted stresses on the plaque. This supports the hypothesis that macrophages thin the fibrous cap at points of highest stress, leading to an increased risk of plaque rupture and subsequent stroke.

Relevance:

20.00%

Publisher:

Abstract:

We consider estimating the total pollutant load from frequent flow data combined with less frequent concentration data. Numerous load estimation methods are available, some of which are captured in various online tools. However, most estimators are statistically subject to large biases, and their associated uncertainties are often not reported; this makes interpretation difficult, and the estimation of trends or the determination of optimal sampling regimes impossible. In this paper, we first propose two indices for measuring the extent of sampling bias, and then provide steps for obtaining reliable load estimates that minimize the biases and make use of informative predictive variables. The key step in this approach is the development of an appropriate predictive model for concentration. This is achieved using a generalized rating-curve approach with additional predictors that capture unique features in the flow data, such as the concept of the first flush, the location of the event on the hydrograph (e.g. rise or fall) and the discounted flow. The latter may be thought of as a measure of constituent exhaustion occurring during flood events. Incorporating this additional information can significantly improve the predictability of concentration, and ultimately the precision with which the pollutant load is estimated. We also provide a measure of the standard error of the load estimate which incorporates model, spatial and/or temporal errors. This method also has the capacity to incorporate measurement error incurred through the sampling of flow. We illustrate this approach for two rivers delivering to the Great Barrier Reef, Queensland, Australia. The first dataset, from the Burdekin River, consists of total suspended sediment (TSS), nitrogen oxides (NOx) and gauged flow for 1997; the second, from the Tully River, covers the period July 2000 to June 2008.
For the Burdekin NOx data, the new estimates are very similar to the ratio estimates even when there is no relationship between concentration and flow. For the Tully dataset, however, incorporating the additional predictive variables, namely the discounted flow and the flow phase (rising or receding), substantially improved the model fit, and thus the certainty with which the load is estimated.
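The rating-curve idea above can be sketched as follows; the data, the simple log-log curve form and the unit handling are illustrative stand-ins for the paper's generalized model with its additional predictors:

```python
import math

# Hypothetical paired observations: flow (m^3/s) and concentration (mg/L).
flows_sampled = [5.0, 12.0, 30.0, 55.0, 80.0]
concs_sampled = [10.0, 18.0, 35.0, 50.0, 65.0]

# Fit a simple rating curve ln(C) = b0 + b1*ln(Q) by ordinary least squares.
x = [math.log(q) for q in flows_sampled]
y = [math.log(c) for c in concs_sampled]
n = len(x)
xbar, ybar = sum(x) / n, sum(y) / n
b1 = (sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
      / sum((xi - xbar) ** 2 for xi in x))
b0 = ybar - b1 * xbar

def predict_conc(q):
    """Predicted concentration (mg/L) from the fitted rating curve."""
    return math.exp(b0 + b1 * math.log(q))

# Regular-interval flow record (e.g. 10-minute steps); the load is the sum
# of predicted-flow x predicted-concentration products over the intervals.
dt = 600.0  # seconds per interval
flows_regular = [4.0, 20.0, 70.0, 40.0, 8.0]
# mg/L * m^3/s = g/s, so multiply by dt for grams, then convert to kg.
load_kg = sum(q * predict_conc(q) * dt for q in flows_regular) / 1000.0
```

The full method would add the first-flush, hydrograph-phase and discounted-flow predictors to the regression rather than using flow alone.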

Relevance:

20.00%

Publisher:

Abstract:

There are numerous load estimation methods available, some of which are captured in various online tools. However, most estimators are statistically subject to large biases, and their associated uncertainties are often not reported; this makes interpretation difficult, and the estimation of trends or the determination of optimal sampling regimes impossible. In this paper, we first propose two indices for measuring the extent of sampling bias, and then provide steps for obtaining reliable load estimates by minimizing the biases and making use of possible predictive variables. The load estimation procedure can be summarized in four steps:
(i) output the flow rates at regular time intervals (e.g. 10 minutes) using a time series model that captures all the peak flows;
(ii) output the predicted flow rates as in (i) at the concentration sampling times, if the corresponding flow rates were not collected;
(iii) establish a predictive model for the concentration data which incorporates all possible predictor variables, and output the predicted concentrations at the regular time intervals as in (i); and
(iv) sum the products of the predicted flow and the predicted concentration over the regular time intervals to obtain an estimate of the load.
The key step in this approach is the development of an appropriate predictive model for concentration. This is achieved using a generalized regression (rating-curve) approach with additional predictors that capture unique features in the flow data, namely the concept of the first flush, the location of the event on the hydrograph (e.g. rise or fall) and the cumulative discounted flow. The latter may be thought of as a measure of constituent exhaustion occurring during flood events. The model also has the capacity to accommodate the autocorrelation in model errors that results from intensive sampling during floods.
Incorporating this additional information can significantly improve the predictability of concentration, and ultimately the precision with which the pollutant load is estimated. We also provide a measure of the standard error of the load estimate which incorporates model, spatial and/or temporal errors. The method also has the capacity to incorporate measurement error incurred through the sampling of flow. We illustrate this approach using concentrations of total suspended sediment (TSS) and nitrogen oxides (NOx) and gauged flow data from the Burdekin River, a catchment delivering to the Great Barrier Reef. The sampling bias indices for NOx concentrations range from 2 to 10, indicating severe bias. As expected, the traditional averaging and extrapolation methods produce much higher estimates than those obtained when the sampling bias is taken into account.
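The two bias indices are not defined in this abstract; one plausible flow-based index, assumed here purely for illustration, compares the mean flow at the concentration sampling times with the mean flow over the whole record:

```python
# Hypothetical gauged flow record (m^3/s) and the subset of time indices at
# which concentration samples were taken (sampling here targets the flood peak).
flow_record = [2.0, 3.0, 5.0, 40.0, 90.0, 60.0, 10.0, 4.0, 3.0, 2.0]
sampled_idx = [3, 4, 5]

mean_all = sum(flow_record) / len(flow_record)
mean_sampled = sum(flow_record[i] for i in sampled_idx) / len(sampled_idx)

# A ratio well above 1 indicates sampling concentrated on high flows; the
# abstract reports index values of 2 to 10 for NOx, i.e. severe bias.
bias_index = mean_sampled / mean_all
```

Such an index explains why naive averaging of flood-biased samples inflates the load estimate: the sampled conditions are not representative of the full record.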

Relevance:

20.00%

Publisher:

Abstract:

An on-line algorithm is developed for the location of single cross-point faults in a PLA (FPLA). The main feature of the algorithm is the determination of a fault set corresponding to the response obtained for a failed test. For the apparently small number of faults in this set, all other tests are generated and a fault table is formed. Subsequently, an adaptive procedure is used to diagnose the fault. If the adaptive testing results in a set of faults with identical tests, a functional equivalence test is carried out to determine the actual fault class. The large amounts of computation time and storage required to determine, a priori, all the fault equivalence classes, or to construct a fault dictionary, are not needed here. A brief study of functional equivalence among the cross-point faults is also included.
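The adaptive step can be sketched as follows; the fault table and the even-split test-selection heuristic are hypothetical illustrations, not the paper's exact procedure:

```python
# Hypothetical fault table: fault -> set of tests it fails. In the paper's
# flow, this table is built only for the small fault set implicated by the
# first failed test, not for all faults a priori.
fault_table = {
    "f1": {"t1", "t2"},
    "f2": {"t1", "t3"},
    "f3": {"t2"},
    "f4": {"t1", "t2", "t3"},
}

def diagnose(actual_fails):
    """Adaptively apply tests, keeping only faults consistent with outcomes."""
    candidates = set(fault_table)
    tests = {t for fails in fault_table.values() for t in fails}
    while len(candidates) > 1 and tests:
        # Heuristic: pick the test that splits the candidate set most evenly.
        t = min(tests, key=lambda t: abs(
            sum(t in fault_table[f] for f in candidates) - len(candidates) / 2))
        tests.discard(t)
        outcome = t in actual_fails  # observed pass/fail for this test
        candidates = {f for f in candidates if (t in fault_table[f]) == outcome}
    # Faults still grouped here have identical tests; the paper resolves
    # them with a functional-equivalence check.
    return candidates

result = diagnose(fault_table["f2"])  # simulate the device having fault f2
```

Each observed outcome discards the inconsistent candidates, so no full fault dictionary is ever stored.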

Relevance:

20.00%

Publisher:

Abstract:

An apparatus is described that facilitates the determination of the incorporation levels of isotope-labelled, gaseous precursors into volatile insect-derived metabolites. Atmospheres of varying gas composition can be generated by evacuation of a working chamber followed by admission of the required levels of the component gases, using a precision, digitised pressure read-out system. Insects such as fruit-flies are located initially in a small introduction chamber, from which migration can occur downwards into the working chamber. The level of incorporation of labelled precursors is continuously assayed by the Solid Phase Micro Extraction (SPME) technique and GC-MS analyses. Experiments with both Bactrocera species (fruit-flies) and a parasitoid wasp, Megarhyssa nortoni nortoni (Cresson), using oxygen-18 labelled dioxygen illustrate the utility of this system. The isotope effects of oxygen-18 on the carbon-13 NMR spectra of 1,7-dioxaspiro[5,5]undecane are also described.
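Under Dalton's law, the cumulative pressure read-out targets for building such an atmosphere can be computed directly; the gas mixture below is a hypothetical example, not a composition from the study:

```python
# Hypothetical target atmosphere: mole fractions and total pressure (kPa).
total_pressure = 101.3
fractions = {"N2": 0.78, "18O2": 0.21, "CO2": 0.01}

# After evacuating the working chamber, each gas is admitted in turn; by
# Dalton's law the running total on the digitised read-out is the sum of
# the partial pressures admitted so far.
cumulative_readings = {}
running = 0.0
for gas, frac in fractions.items():
    running += frac * total_pressure
    cumulative_readings[gas] = round(running, 2)
```

The operator fills each gas until the read-out reaches its cumulative target, ending at the full working pressure.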

Relevance:

20.00%

Publisher:

Abstract:

With many innovations in process technology, forging is establishing itself as a precision manufacturing process. Because forging is used to produce complex shapes in difficult materials, it requires dies of complex configuration, made of high-strength, wear-resistant materials. Extensive research and development work is being undertaken, internationally, to analyse the stresses in forging dies and the flow of material in forged components. Identification of the location, size and shape of dead-metal zones is required for component design. Further, knowledge of the strain distribution in the flowing metal indicates the degree to which the component is being work hardened. Such information is helpful in the selection of process parameters such as dimensional allowances and interface lubrication, as well as in the determination of post-forging operations such as heat treatment and machining. In the present work, the effect of aperture width and initial specimen height on the strain distribution in the plane-strain extrusion forging of machined lead billets is observed: the distortion of grids inscribed on the face of the specimen yields the strain distribution. The stress-equilibrium approach is used to optimise a model of flow in extrusion forging, and the model is found to be effective in estimating the size of the dead-metal zone. The work carried out so far indicates that the methodology of using the stress-equilibrium approach to develop models of flow in closed-die forging can be a useful tool in component, process and die design.
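The grid-distortion measurement can be sketched as follows; the spacings are hypothetical, and volume constancy under plane strain is the standard assumption for the plastic flow of lead:

```python
import math

# Hypothetical grid cell on the specimen face: initial spacing and the
# deformed spacings (mm) measured along and across the flow direction.
l0 = 2.0
lx = 3.0        # elongated along the flow direction
ly = 4.0 / 3.0  # thinned across it

# Logarithmic (true) strains recovered from the grid distortion; under
# plane strain, volume constancy implies eps_x + eps_y ~ 0.
eps_x = math.log(lx / l0)
eps_y = math.log(ly / l0)
```

Repeating this cell by cell over the inscribed grid gives the strain distribution map; cells with near-zero distortion mark the dead-metal zone.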

Relevance:

20.00%

Publisher:

Abstract:

The transmission loss of a rectangular expansion chamber, the inlet and outlet of which are situated at arbitrary locations on the chamber, i.e., on a side wall or on a face of the chamber, is analyzed here based on the Green's function of a rectangular cavity with homogeneous boundary conditions. The rectangular chamber Green's function is expressed in terms of a finite number of rigid rectangular cavity mode shapes. The inlet and outlet ports are modeled as uniform-velocity pistons; if the size of the piston is small compared to the wavelength, then plane wave excitation is a valid assumption. The velocity potential inside the chamber is expressed by superimposing the velocity potentials of two constituent configurations: the first has a piston source at the inlet port and a rigid termination at the outlet, and the second a piston at the outlet with a rigid termination at the inlet. Pressure inside the chamber is derived from the velocity potentials using the linear momentum equation. The average pressure acting on the pistons at the inlet and outlet locations is estimated by integrating the acoustic pressure over the piston area in the two constituent configurations. The transfer matrix is derived from the average pressure values, and thence the transmission loss (TL) is calculated. The results are verified against those in the literature based on modal expansions and also against numerical (FEM fluid) models. The transfer matrix formulation for rectangular chambers with yielding walls has also been derived, incorporating the structural-acoustic coupling. Parametric studies are conducted for different inlet and outlet configurations, and the various phenomena occurring in the TL curves that cannot be explained by the classical plane wave theory are discussed.
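As a point of comparison, the classical plane-wave transmission loss of a simple expansion chamber, the theory the modal results are contrasted against, can be computed directly; the dimensions below are illustrative:

```python
import math

def transmission_loss_plane_wave(area_ratio, length, frequency, c=343.0):
    """Classical plane-wave TL (dB) of a simple expansion chamber.

    area_ratio: chamber area / duct area, length: chamber length (m),
    frequency in Hz, c: speed of sound (m/s).
    """
    k = 2.0 * math.pi * frequency / c  # acoustic wavenumber
    m = area_ratio
    return 10.0 * math.log10(1.0 + 0.25 * (m - 1.0 / m) ** 2
                             * math.sin(k * length) ** 2)

# TL vanishes whenever k*L is a multiple of pi (pass-through frequencies)
# and peaks at odd multiples of pi/2; here f is chosen so that k*L = pi/2.
tl_peak = transmission_loss_plane_wave(4.0, 0.5, 343.0 / (4 * 0.5))
```

The transverse-mode effects captured by the Green's-function approach (e.g. for off-centre ports) appear as departures from this one-dimensional curve.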

Relevance:

20.00%

Publisher:

Abstract:

The appropriate frequency and precision for surveys of wildlife populations represent a trade-off between survey cost and the risk of making suboptimal management decisions because of poor survey data. The commercial harvest of kangaroos is primarily regulated through annual quotas set as proportions of absolute estimates of population size. Stochastic models were used to explore the effects of varying precision, survey frequency and harvest rate on the risk of quasiextinction for an arid-zone and a more mesic-zone kangaroo population. Quasiextinction probability increases in a sigmoidal fashion as survey frequency is reduced. The risk is greater in more arid regions and is highly sensitive to harvest rate. An appropriate management regime involves regular surveys in the major harvest areas, where the harvest rate can be set close to the maximum sustained yield. Outside these areas, survey frequency can be reduced in relatively mesic areas, and reduced in arid regions only in combination with lowered harvest rates. Relative to other factors, quasiextinction risk is only affected by survey precision (standard error/mean × 100) when it exceeds 50%, partly reflecting the safety of the strategy of harvesting a proportion of a population estimate.
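A minimal sketch of such a stochastic harvest model, assuming logistic growth with environmental noise and a quota set as a proportion of a noisy survey estimate (model form and all parameter values are illustrative, not the paper's):

```python
import random

def quasiextinction_prob(harvest_rate, survey_cv, years=50, runs=2000,
                         r=0.3, K=1.0, threshold=0.1, seed=1):
    """Fraction of simulated runs in which the population falls below a
    quasiextinction threshold within the horizon."""
    rng = random.Random(seed)
    extinct = 0
    for _ in range(runs):
        n = K
        for _ in range(years):
            # Survey estimate with proportional error; the quota is a
            # fixed proportion of the estimate, as in kangaroo harvests.
            estimate = max(n * rng.gauss(1.0, survey_cv), 0.0)
            harvest = harvest_rate * estimate
            # Logistic growth with environmental stochasticity.
            growth_rate = rng.gauss(r, 0.2)
            n = max(n + growth_rate * n * (1.0 - n / K) - harvest, 0.0)
            if n < threshold:
                extinct += 1
                break
    return extinct / runs

# Higher harvest rates should sharply raise quasiextinction risk.
p_low = quasiextinction_prob(0.10, survey_cv=0.2)
p_high = quasiextinction_prob(0.30, survey_cv=0.2)
```

Because the quota is proportional to the estimate, survey error largely averages out, which is why the abstract finds risk insensitive to precision until the coefficient of variation is very large.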

Relevance:

20.00%

Publisher:

Abstract:

The Queensland Great Barrier Reef line fishery in Australia is regulated via a range of input and output controls including minimum size limits, daily catch limits and commercial catch quotas. As a result of these measures, a substantial proportion of the catch is released or discarded. The fate of these released fish is uncertain, but hook-related mortality can potentially be decreased by using hooks that reduce the rates of injury, bleeding and deep hooking. There is also the potential to reduce the capture of non-target species through gear selectivity. A total of 1053 individual fish, representing five target species and three non-target species, were caught using six hook types comprising three hook patterns (non-offset circle, J and offset circle), each in two sizes (small, 4/0 or 5/0, and large, 8/0). Catch rates for each of the hook patterns and sizes varied between species, with no consistent results for target or non-target species. When the data for all fish species were aggregated, there was a trend for larger hooks, J hooks and offset circle hooks to cause a greater number of injuries. Using larger hooks was more likely to result in bleeding, although this trend was not statistically significant. Larger hooks were also more likely to foul-hook fish or hook fish in the eye. There was a reduction in the rates of injuries and bleeding for both target and non-target species when using the smaller hook sizes. For a number of species in our study the incidence of deep hooking decreased when using non-offset circle hooks; however, these results were not consistent across all species. Our results highlight the variability in hook performance across a range of tropical demersal finfish species. The most obvious conservation benefits for both target and non-target species arise from using smaller hooks and non-offset circle hooks. Fishers should be encouraged to use these hook configurations to reduce the potential for post-release mortality of released fish.

Relevance:

20.00%

Publisher:

Abstract:

Biodiversity of sharks in the tropical Indo-Pacific is high, but species-specific information to assist sustainable resource exploitation is scarce. The null hypothesis of population genetic homogeneity was tested for the scalloped hammerhead shark (Sphyrna lewini, n=244) and the milk shark (Rhizoprionodon acutus, n=209) from northern and eastern Australia, using nuclear (S. lewini, eight microsatellite loci; R. acutus, six loci) and mitochondrial gene markers (873 base pairs of NADH dehydrogenase subunit 4). We were unable to reject genetic homogeneity for S. lewini, as expected from previous studies of this species. Less expected were the similar results for R. acutus, which is more benthic and less vagile than S. lewini. These features are probably driving the genetic break found between Australian and central Indonesian R. acutus (F-statistics: mtDNA, 0.751 to 0.903; microsatellite loci, 0.038 to 0.047). Our results support the spatially homogeneous management plan for shark species in Queensland, but caution is advised for species yet to be studied.

Relevance:

20.00%

Publisher:

Abstract:

Major effect genes are often used for germplasm identification, for diversity analyses and as selection targets in breeding. To date, only a few morphological characters have been mapped as major effect genes across a range of genetic linkage maps based on different types of molecular markers in sorghum (Sorghum bicolor (L.) Moench). This study aims to integrate all available previously mapped major effect genes onto a complete genome map, linked to the whole genome sequence, allowing sorghum breeders and researchers to link this information to QTL studies and to be aware of the consequences of selection for major genes. This provides new opportunities for breeders to take advantage of readily scorable morphological traits and to develop more effective breeding strategies. We also provide examples of the impact of selection for major effect genes on quantitative traits in sorghum. The concepts described in this paper have particular application to breeding programmes in developing countries where molecular markers are expensive or impossible to access.

Relevance:

20.00%

Publisher:

Abstract:

The project will produce practical and relevant benchmarks, protocols and recommendations for the adoption of remote sensing technologies for improved in-season management, and therefore production, within the Australian sugar cane industry.