968 results for informative counting
Abstract:
Spectral CT using a photon counting x-ray detector (PCXD) shows great potential for measuring material composition based on energy-dependent x-ray attenuation. Spectral CT is especially suited for imaging with K-edge contrast agents to address the otherwise limited contrast in soft tissues. We have developed a micro-CT system based on a PCXD. This system enables full-spectrum CT, in which the energy thresholds of the PCXD are swept to sample the full energy spectrum for each detector element and projection angle. Measurements provided by the PCXD, however, are distorted due to undesirable physical effects in the detector and are very noisy due to photon starvation. In this work, we propose two methods based on machine learning to address the spectral distortion issue and to improve the material decomposition. The first approach is to model the distortions using an artificial neural network (ANN) and compensate for them in a statistical reconstruction. The second approach is to directly correct for the distortion in the projections. Both techniques can be implemented as a calibration process in which the neural network is trained on data from 3D-printed phantoms to learn either the distortion model or the correction model for the spectral distortion. This replaces the need for the synchrotron measurements required by conventional techniques to derive the distortion model parametrically, which can be costly and time-consuming. The results demonstrate the experimental feasibility and potential advantages of ANN-based distortion modeling and correction for more accurate K-edge imaging with a PCXD. Given the computational efficiency with which the ANN can be applied to projection data, the proposed scheme can be readily integrated into existing CT reconstruction pipelines.
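A rough, hypothetical sketch of the projection-domain correction idea (not the authors' implementation; the network size, bin count, distortion model, and synthetic "phantom" data below are all invented for illustration):

```python
# Hypothetical sketch: learn a projection-domain spectral-distortion
# correction with a small neural network (scikit-learn's MLPRegressor).
# Shapes, names, and the synthetic calibration data are illustrative only.
import numpy as np
from sklearn.neural_network import MLPRegressor

n_bins = 8            # energy thresholds swept by the PCXD (assumed)
n_samples = 5000      # calibration measurements from 3D-printed phantoms

rng = np.random.default_rng(0)

# Ideal (undistorted) per-bin spectra the phantoms should produce.
ideal = rng.gamma(shape=2.0, scale=100.0, size=(n_samples, n_bins))

# Stand-in for detector distortion: spectral blurring across bins
# (e.g., charge sharing) plus Poisson noise from photon starvation.
blur = np.eye(n_bins) * 0.8 + np.eye(n_bins, k=1) * 0.1 + np.eye(n_bins, k=-1) * 0.1
distorted = rng.poisson(ideal @ blur).astype(float)

# Train the correction model: distorted counts -> ideal counts.
net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
net.fit(distorted, ideal)

# At scan time, the trained net would be applied per detector element
# and projection angle before (or within) reconstruction.
corrected = net.predict(distorted[:5])
print(corrected.shape)  # (5, 8)
```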
Abstract:
The current crime decrease is defying traditional criminological theories such as those espoused by Bonger (1916), who researched the relationship between crime and economic conditions and stated that when unemployment rises, so does crime. Both the USA and the UK have been in a deep recession since 2008, yet crime has dropped dramatically in both countries while unemployment has risen; over the past 20 years it has halved in England and Wales. So how do we explain this phenomenon? Crime is down across the West, but more so in Britain (see Figure 1). In England and Wales crime decreased by 8% in a single year (2013). Vandalism is down by 14%, and burglaries and vehicle crime by 11%. The murder rate in the UK is at its lowest since 1978; in 2013, 540 people were killed. Some less serious offences are vanishing too: antisocial behaviour has fallen from just under 4 million incidents in 2007-08 to 2.4 million (The Economist, 20/4/13). According to the most recent annual results from the Crime Survey for England and Wales (CSEW), crime is at its lowest level since the survey began in 1981: the latest figures show there were an estimated 7.3 million incidents of crime against households and resident adults (aged 16 and over) in England and Wales for the year ending March 2014. This represents a 14% decrease compared with the previous year's survey and is the lowest estimate since the survey began in 1981.
Abstract:
Presentation at the CRIS2016 conference in St Andrews, June 10, 2016
Abstract:
Starting with an evaluator for a language, an abstract machine for the same language can be mechanically derived using successive program transformations. This has relevance to studying both the space and time properties of programs because these can be estimated by counting transitions of the abstract machine and measuring the size of the additional data structures needed, such as environments and stacks. In this article we use this process to derive a function that accurately counts the number of steps required to evaluate expressions in a simple language.
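As a toy illustration of the end product (the expression language and step-counting convention here are invented, not those of the article), an evaluator can be instrumented to return a step count alongside the value:

```python
# Toy illustration: an evaluator for a tiny expression language that
# returns both the value and the number of evaluation steps taken,
# mimicking transition counting in a derived abstract machine.
from dataclasses import dataclass

@dataclass
class Lit:
    value: int

@dataclass
class Add:
    left: "Lit | Add"
    right: "Lit | Add"

def eval_counted(expr):
    """Return (value, steps): each node visited costs one step."""
    if isinstance(expr, Lit):
        return expr.value, 1
    left_val, left_steps = eval_counted(expr.left)
    right_val, right_steps = eval_counted(expr.right)
    return left_val + right_val, left_steps + right_steps + 1

value, steps = eval_counted(Add(Lit(1), Add(Lit(2), Lit(3))))
print(value, steps)  # 6 5
```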
Abstract:
Rainflow counting methods convert a complex load time history into a set of load reversals for use in fatigue damage modeling. They were originally developed to assess fatigue damage associated with mechanical cycling where creep of the material under load was not considered a significant contributor to failure. However, creep is a significant factor in some cyclic loading cases, such as solder interconnects under temperature cycling, where fatigue life models require the dwell time to account for stress relaxation and creep. This study develops a new version of the multi-parameter rainflow counting algorithm that provides a range-based dwell time estimation for use with time-dependent fatigue damage models. To demonstrate its applicability, the method is used to calculate the life of solder joints under a complex thermal cycling regime and is verified by experimental testing. An additional algorithm is developed in this study to provide data reduction of the rainflow counting results. This algorithm uses a damage model and a statistical test to determine which of the resultant cycles are statistically insignificant at a given confidence level. This makes the resulting data file smaller and allows a simplified load history to be reconstructed.
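For background, a minimal sketch of classic three-point rainflow counting over a sequence of load reversals is given below; note that it deliberately omits the range-based dwell-time estimation that is this study's contribution, and simplifies the usual half-cycle rules:

```python
def rainflow_ranges(reversals):
    """Simplified three-point rainflow counting (ASTM E1049 style,
    without the dwell-time estimation the study adds). Returns
    (range, count) pairs: 1.0 for full cycles, 0.5 for residuals."""
    stack, cycles = [], []
    for point in reversals:
        stack.append(point)
        while len(stack) >= 3:
            x = abs(stack[-1] - stack[-2])  # most recent range
            y = abs(stack[-2] - stack[-3])  # candidate cycle range
            if x < y:
                break                       # cannot close a cycle yet
            cycles.append((y, 1.0))         # close Y as a full cycle
            del stack[-3:-1]                # drop the two points of Y
    # Whatever remains in the residue counts as half cycles.
    for a, b in zip(stack, stack[1:]):
        cycles.append((abs(b - a), 0.5))
    return cycles

# Example: a short reversal sequence (alternating peaks and valleys).
print(rainflow_ranges([-2, 1, -3, 5, -1, 3, -4, 4, -2]))
```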
Abstract:
Proliferation of microglial cells has been considered a sign of glial activation and a hallmark of ongoing neurodegenerative disease. Microglial activation is analyzed in animal models of different eye diseases, and numerous retinal samples are required for each of these studies to obtain statistically significant data. Because manual quantification of microglial cells is time-consuming, the aim of this study was to develop an algorithm for the automatic identification of retinal microglia. Two groups of adult male Swiss mice were used: age-matched controls (naïve, n = 6) and mice subjected to unilateral laser-induced ocular hypertension (lasered, n = 9). In the latter group, both the hypertensive eyes and the contralateral untreated retinas were analyzed. Retinal whole mounts were immunostained with anti-Iba-1 to detect microglial cell populations. A new algorithm was developed in MATLAB for microglial quantification; it quantifies microglial cells in the inner and outer plexiform layers and evaluates the area of the retina occupied by Iba-1+ microglia in the nerve fiber-ganglion cell layer. The automatic method was applied to a set of 6,000 images. To validate the algorithm, mouse retinas were evaluated both manually and computationally; the program correctly assessed the number of cells (Pearson correlation R = 0.94 and R = 0.98 for the inner and outer plexiform layers, respectively). Statistically significant differences in glial cell number were found between naïve, lasered, and contralateral eyes (P<0.05, naïve versus contralateral eyes; P<0.001, naïve versus lasered eyes and contralateral versus lasered eyes). The algorithm developed is a reliable and fast tool that can evaluate the number of microglial cells in naïve mouse retinas and in retinas exhibiting proliferation. The implementation of this new automatic method can enable faster quantification of microglial cells in retinal pathologies.
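A minimal sketch of the general thresholding-and-labelling approach to automatic cell counting (the study's actual algorithm was implemented in MATLAB and is more elaborate; the parameters and synthetic image below are illustrative):

```python
# Illustrative cell counting by intensity thresholding and
# connected-component labelling; not the study's MATLAB algorithm.
import numpy as np
from scipy import ndimage

def count_cells(image, threshold, min_area=20):
    """Count bright blobs (candidate Iba-1+ somata) in a 2D image.
    `threshold` and `min_area` are illustrative parameters."""
    mask = image > threshold
    labels, n = ndimage.label(mask)    # connected components
    areas = ndimage.sum(mask, labels, index=np.arange(1, n + 1))
    return int(np.sum(areas >= min_area))  # drop tiny specks

# Synthetic example: two bright square "cells" on a dark background.
img = np.zeros((100, 100))
img[10:20, 10:20] = 1.0
img[60:75, 40:55] = 1.0
print(count_cells(img, threshold=0.5))  # 2
```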
Abstract:
Size distributions of expiratory droplets expelled during coughing and speaking and the velocities of the expiration air jets of healthy volunteers were measured. Droplet size was measured using the Interferometric Mie Imaging (IMI) technique, while the Particle Image Velocimetry (PIV) technique was used for measuring air velocity. These techniques allowed measurements in close proximity to the mouth and avoided air sampling losses. The average expiration air velocity was 11.7 m/s for coughing and 3.9 m/s for speaking. Under the experimental setting, evaporation and condensation effects had negligible impact on the measured droplet size. The geometric mean diameter of droplets from coughing was 13.5 μm, and it was 16.0 μm for speaking (counting droplets of 1 to 100 μm). The estimated total number of droplets expelled ranged from 947 to 2085 per cough and from 112 to 6720 for speaking. The estimated droplet concentrations ranged from 2.4 to 5.2 cm^-3 per cough and from 0.004 to 0.223 cm^-3 for speaking.
Abstract:
Background The problem of silent multiple comparisons is one of the most difficult statistical problems faced by scientists. It is a particular problem for investigating a one-off cancer cluster reported to a health department because any one of hundreds, or possibly thousands, of neighbourhoods, schools, or workplaces could have reported a cluster, which could have been for any one of several types of cancer or any one of several time periods. Methods This paper contrasts the frequentist approach with a Bayesian approach for dealing with silent multiple comparisons in the context of a one-off cluster reported to a health department. Two published cluster investigations were re-analysed using the Dunn-Sidak method to adjust frequentist p-values and confidence intervals for silent multiple comparisons. Bayesian methods were based on the Gamma distribution. Results Bayesian analysis with non-informative priors produced results similar to the frequentist analysis, and suggested that both clusters represented a statistical excess. In the frequentist framework, the statistical significance of both clusters was extremely sensitive to the number of silent multiple comparisons, which can only ever be a subjective "guesstimate". The Bayesian approach is also subjective: whether there is an apparent statistical excess depends on the specified prior. Conclusion In cluster investigations, the frequentist approach is just as subjective as the Bayesian approach, but the Bayesian approach is less ambitious in that it treats the analysis as a synthesis of data and personal judgements (possibly poor ones), rather than objective reality. Bayesian analysis is (arguably) a useful tool to support complicated decision-making, because it makes the uncertainty associated with silent multiple comparisons explicit.
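A hypothetical worked example of the two calculations the paper contrasts, with invented cluster data (the observed and expected case counts, and the guesstimated number of silent comparisons, are made up for the example):

```python
# Illustrative comparison of the frequentist and Bayesian approaches;
# all numbers below are invented.
from scipy import stats

observed, expected = 8, 2.0      # hypothetical cluster data
k_silent = 1000                  # guesstimated silent comparisons

# Frequentist: Poisson p-value, then Dunn-Sidak adjustment
# p_adj = 1 - (1 - p)^k.
p = stats.poisson.sf(observed - 1, expected)   # P(X >= observed)
p_adj = 1.0 - (1.0 - p) ** k_silent
print(f"raw p = {p:.2e}, Dunn-Sidak adjusted p = {p_adj:.3f}")

# Bayesian: with a Gamma(a, b) prior on the rate ratio and a Poisson
# likelihood, the posterior is Gamma(a + observed, b + expected).
# A near-non-informative prior: a = b = 0.001.
a, b = 0.001, 0.001
posterior = stats.gamma(a + observed, scale=1.0 / (b + expected))
print("P(rate ratio > 1 | data) =", posterior.sf(1.0))
```

Varying `k_silent` in this sketch shows how sensitive the adjusted p-value is to the guesstimated number of comparisons, which is the paper's central point.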
Abstract:
This document provides a review of international and national practices in investment decision-support tools in road asset management. Efforts were concentrated on identifying the analytic frameworks, evaluation methodologies, and criteria adopted by current tools. Emphasis was also given to how current approaches support Triple Bottom Line decision-making. Benefit Cost Analysis and Multiple Criteria Analysis are the principal methodologies for supporting decision-making in Road Asset Management. The complexity of the applications shows significant differences in international practices. There is continuing discussion amongst practitioners and researchers regarding which one is more appropriate for supporting decision-making. It is suggested that the two approaches should be regarded as complementary rather than competing means. Multiple Criteria Analysis may be particularly helpful in the early stages of project development, such as strategic planning. Benefit Cost Analysis is used most widely for project prioritisation and for selecting the final project from amongst a set of alternatives. The Benefit Cost Analysis approach is a useful tool for investment decision-making from an economic perspective. An extension of the approach, which includes social and environmental externalities, is currently used to support Triple Bottom Line decision-making in the road sector. However, several issues in its application deserve attention. First of all, there is a need to reach a degree of commonality in considering social and environmental externalities, which may be achieved by aggregating best practices. At different decision-making levels, the detail in which the externalities are considered should differ. It is intended to develop a generic framework to coordinate the range of existing practices. The standard framework will also be helpful in reducing double counting, which appears in some current practices. Caution should also be applied to the methods of determining the value of social and environmental externalities. A number of methods, such as market price, resource costs, and Willingness to Pay, are found in the review. The use of unreasonable monetisation methods in some cases has discredited Benefit Cost Analysis in the eyes of decision-makers and the public. Some social externalities, such as employment and regional economic impacts, are generally omitted in current practices, owing to the lack of information and credible models. It may be appropriate to consider these externalities in qualitative form in a Multiple Criteria Analysis. Consensus has been reached on considering noise and air pollution in international practices; however, Australian practices have generally omitted these externalities. Equity is an important consideration in Road Asset Management. The considerations are either between regions or between social groups defined by, for example, income, age, gender, or disability. In current practice there is no well-developed quantitative measure for equity issues, and more research is needed on this issue. Although Multiple Criteria Analysis has been used for decades, there is no generally accepted framework for the choice of modelling methods and the externalities considered. The result is that different analysts are unlikely to reach consistent conclusions about a policy measure. In current practices, some favour methods that are able to prioritise alternatives, such as Goal Programming, Goal Achievement Matrix, and the Analytic Hierarchy Process.
Others simply present the various impacts to decision-makers to characterise the projects. Weighting and scoring systems are critical in most Multiple Criteria Analysis, but the processes of assessing weights and scores have been criticised as highly arbitrary and subjective. It is essential that the process be as transparent as possible. Obtaining weights and scores by consulting local communities is a common practice, but is likely to result in bias towards local interests. The interactive approach has the advantage of helping decision-makers elaborate their preferences; however, the computational burden may cause decision-makers to lose interest during the solution process for a large-scale problem, such as a large state road network. Current practices tend to use cardinal or ordinal scales to measure non-monetised externalities. Distorted valuations can occur where variables measured in physical units are converted to scales. For example, if decibels of noise are converted to a scale of -4 to +4 by a linear transformation, the difference between 3 and 4 may represent a far greater increase in discomfort to people than the increase from 0 to 1; it is therefore suggested that different weights be assigned to individual scores. Owing to overlapping goals, the problem of double counting also appears in some Multiple Criteria Analysis applications. The situation can be improved by carefully selecting and defining investment goals and criteria. Other issues, such as the treatment of time effects and the incorporation of risk and uncertainty, have been given scant attention in current practices. This report suggests establishing a common analytic framework to deal with these issues.
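As a toy example of the weighted-sum scoring that underlies many Multiple Criteria Analysis applications (the projects, criteria, weights, and scores below are invented for illustration):

```python
# Minimal weighted-sum Multiple Criteria Analysis sketch; all data
# is invented. Scores are on an ordinal -4..+4 scale per criterion.
projects = {
    "bypass upgrade":  {"safety": 4, "noise": -2, "cost": -3},
    "rural resealing": {"safety": 2, "noise":  0, "cost": -1},
}
weights = {"safety": 0.5, "noise": 0.2, "cost": 0.3}

def weighted_score(scores):
    return sum(weights[c] * s for c, s in scores.items())

# Rank alternatives by aggregate score, best first.
for name, scores in sorted(projects.items(),
                           key=lambda kv: weighted_score(kv[1]),
                           reverse=True):
    print(f"{name}: {weighted_score(scores):+.2f}")
```

The sketch also makes the review's criticism concrete: the ranking can flip entirely under a different, equally defensible choice of weights.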
Abstract:
The following technical report describes the approach and algorithm used to detect marine mammals in aerial imagery taken from manned/unmanned platforms. The aim is to automate the process of counting the population of dugongs and other mammals. We have developed an algorithm that automatically presents a user with a number of possible candidate detections of these mammals. We tested the algorithm on two distinct datasets taken from different altitudes. Analysis and discussion are presented regarding the complexity of the input datasets and the detection performance.
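A hypothetical sketch of a candidate detector in this spirit, flagging regions that deviate from the water background and returning bounding boxes for a human reviewer to confirm (the thresholds and sizes are invented, not the report's values):

```python
# Illustrative candidate detector for aerial imagery; parameters
# and the synthetic test image are invented.
import numpy as np
from scipy import ndimage

def candidate_boxes(image, z_thresh=3.0, min_pixels=50):
    """Return bounding boxes of regions brighter or darker than the
    background by more than z_thresh robust standard deviations."""
    med = np.median(image)
    mad = np.median(np.abs(image - med)) + 1e-9   # robust spread
    mask = np.abs(image - med) > z_thresh * 1.4826 * mad
    labels, _ = ndimage.label(mask)               # group flagged pixels
    boxes = []
    for i, sl in enumerate(ndimage.find_objects(labels), start=1):
        if np.sum(labels[sl] == i) >= min_pixels:  # ignore small noise
            boxes.append((sl[0].start, sl[1].start, sl[0].stop, sl[1].stop))
    return boxes

# Synthetic example: uniform "water" with one dugong-sized bright blob.
img = np.full((200, 200), 0.3)
img[50:60, 80:100] = 0.9
print(candidate_boxes(img))  # [(50, 80, 60, 100)]
```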
Abstract:
In this article we examine how a consumer's susceptibility to informative influence (SII) affects the effectiveness of consumer testimonials in print advertising. More specifically, we show that consumers who are high in SII, and who seek consumption-relevant information from other people, are more influenced by the strength of the testimonial information than by the strength of the attribute information. Conversely, consumers low in SII place greater emphasis on the strength of the attribute information when forming their evaluations. Our results show that consumer psychological traits can have an important impact on the acceptance of testimonial advertising. Theoretical and managerial implications of our findings are discussed.