992 results for computational statistics


Relevance: 20.00%

Abstract:

Cyclic nitroxide radicals represent promising alternatives to the iodine-based redox mediator commonly used in dye-sensitized solar cells (DSSCs). To date, DSSCs with nitroxide-based redox mediators have achieved energy conversion efficiencies of just over 5 %, but efficiencies of over 15 % might be achievable given an appropriate mediator. The efficacy of the mediator depends upon two main factors: it must reversibly undergo one-electron oxidation, and it must possess an oxidation potential in the range of 0.600-0.850 V (vs. the standard hydrogen electrode (SHE) in acetonitrile at 25 °C). Herein, we have examined the effect of structural modifications on the oxidation potentials of cyclic nitroxides, as well as on the reversibility of the oxidation process. These included alterations to the N-containing skeleton (pyrrolidine, piperidine, isoindoline, azaphenalene, etc.) as well as the introduction of different substituents (alkyl-, methoxy-, amino-, carboxy-, etc.) to the ring. Standard oxidation potentials were calculated using high-level ab initio methodology that was demonstrated to be very accurate (with a mean absolute deviation from experimental values of only 16 mV). An optimal value of 1.45 for the electrostatic scaling factor for UAKS radii in acetonitrile solution was obtained. Established trends in the values of oxidation potentials were used to guide the molecular design of stable nitroxides with the desired E°ox, and a number of compounds were suggested for potential use as enhanced redox mediators in DSSCs. Copyright © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
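
The conversion from a computed free energy of oxidation to a potential referenced to the SHE can be sketched as follows. This is a generic textbook relation, not the paper's workflow; the absolute SHE potential used here is an assumed literature value (reported values for acetonitrile vary by tens of millivolts), and the function name and example ΔG are illustrative:

```python
F = 96485.332          # Faraday constant, C/mol
E_SHE_ABS = 4.28       # assumed absolute SHE potential in V (literature values vary)

def oxidation_potential(delta_g_kj_per_mol, n=1):
    """E°ox vs. SHE from the computed solution-phase free energy of
    oxidation: E° = ΔG/(nF) − E_abs(SHE), with ΔG in kJ/mol."""
    return delta_g_kj_per_mol * 1000.0 / (n * F) - E_SHE_ABS

# A hypothetical ΔG of 480 kJ/mol for one-electron oxidation
# lands inside the 0.600-0.850 V target window quoted above.
e_ox = oxidation_potential(480.0)
```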

Relevance: 20.00%

Abstract:

In a commercial environment, it is advantageous to know how long customers take to move between different regions, how long they spend in each region, and where they are likely to go as they move from one location to another. Presently, these measures can only be determined manually or through the use of hardware tags (e.g. RFID). Soft biometrics are characteristics that can be used to describe, but not uniquely identify, an individual. They include traits such as height, weight, gender, hair, skin and clothing colour. Unlike traditional biometrics, soft biometrics can be acquired by surveillance cameras at range without any user cooperation. While these traits cannot provide robust authentication, they can be used to provide identification at long range and to aid object tracking and detection in disjoint camera networks. In this chapter we propose using colour, height and luggage soft biometrics to determine operational statistics relating to how people move through a space. A novel average soft biometric is used to locate people who look distinct, and these people are then detected at various locations within a disjoint camera network to gradually build up the operational statistics.
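
As a rough illustration of the "average soft biometric" idea, one can score how distinct each person looks by their distance from the crowd-average trait vector. The trait layout and numbers below are hypothetical, not taken from the chapter:

```python
import numpy as np

def distinctiveness_scores(features):
    """Score each person by Euclidean distance from the crowd-average
    soft biometric vector; higher means more visually distinct.

    features: (n_people, n_traits) array, e.g. columns for normalised
    height and torso/leg colour components (layout is illustrative).
    """
    average = features.mean(axis=0)          # the "average" soft biometric
    return np.linalg.norm(features - average, axis=1)

# Toy crowd: four people described by three normalised traits.
people = np.array([
    [0.50, 0.40, 0.45],
    [0.52, 0.42, 0.44],
    [0.51, 0.41, 0.46],
    [0.95, 0.90, 0.10],   # a visually distinct person
])
scores = distinctiveness_scores(people)
most_distinct = int(np.argmax(scores))
```

In a deployment, the highest-scoring people would be the ones re-detected across the disjoint cameras, since they are least likely to be confused with others.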

Relevance: 20.00%

Abstract:

Queensland University of Technology (QUT) was one of the first universities in Australia to establish an institutional repository. Launched in November 2003, the repository, QUT ePrints (http://eprints.qut.edu.au), uses the EPrints open source repository software (from Southampton) and has enjoyed the benefit of an institutional deposit mandate since January 2004. Currently (April 2012), the repository holds over 36,000 records, including 17,909 open access publications, with another 2,434 publications embargoed but with mediated access enabled via the ‘Request a copy’ button, a feature of the EPrints software. At QUT, the repository is managed by the Library. The repository is embedded into a number of other systems at QUT, including the staff profile system and the University’s research information system. It has also been integrated into a number of critical processes related to Government reporting and research assessment. Internally, senior research administrators often look to the repository for information to assist with decision-making and planning. While some statistics could be drawn from the advanced search feature and the existing download statistics feature, they were rarely at the level of granularity or aggregation required, and getting the information from the ‘back end’ of the repository was very time-consuming for Library staff. In 2011, the Library funded a project to enhance the range of statistics available from the public interface of QUT ePrints. The repository team conducted a series of focus groups and individual interviews to identify and prioritise functionality requirements for a new statistics ‘dashboard’. The participants included a mix of research administrators, early career researchers and senior researchers. The repository team identified a number of business criteria (e.g. extensibility, support available, skills required) and gave each a weighting.
After considering all the known options, five software packages (IRStats, ePrintsStats, AWStats, BIRT and Google Urchin/Analytics) were thoroughly evaluated against a list of 69 criteria to determine which would be most suitable. The evaluation revealed that IRStats was the best fit for our requirements, being deemed capable of meeting 21 of the 31 high-priority criteria. Consequently, IRStats was implemented as the basis for QUT ePrints’ new statistics dashboards, which were launched in Open Access Week, October 2011. Statistics dashboards are now available at four levels: whole-of-repository, organisational unit, individual author and individual item. The data available includes cumulative total deposits, time-series deposits, deposits by item type, % full-texts, % open access, cumulative downloads, time-series downloads, downloads by item type, author ranking, paper ranking (by downloads), downloader geographic location, domains, internal vs external downloads, citation data (from Scopus and Web of Science), most popular search terms, and non-search referring websites. The data is displayed in chart, map and table formats. The new statistics dashboards are a great success, and feedback received from staff and students has been very positive. Individual researchers have said that they have found the information very useful when compiling a track record. It is now very easy for senior administrators (including the Deputy Vice-Chancellor (Research)) to compare full-text deposit rates (i.e. mandate compliance rates) across organisational units. This has led to increased ‘encouragement’ from Heads of School and Deans in relation to the provision of full-text versions.

Relevance: 20.00%

Abstract:

Purpose: The management of unruptured aneurysms remains controversial, as treatment carries potentially significant risk for a patient who is currently well. The decision to treat is based upon aneurysm location, size and abnormal morphology (e.g. bleb formation). A method to predict bleb formation would thus help stratify patient treatment. Our study aims to investigate possible associations between intra-aneurysmal flow dynamics and bleb formation within intracranial aneurysms. Competing theories on aetiology appear in the literature; our purpose is to further clarify this issue. Methodology: We recruited 3D rotational angiogram (3DRA) data from 30 patients with cerebral aneurysms and bleb formation. Models representing aneurysms pre-bleb formation were reconstructed by digitally removing the bleb, and computational fluid dynamics (CFD) simulations were then run on both the pre- and post-bleb models. Pulsatile flow and standard boundary conditions were imposed. Results: Aneurysmal flow structure, impingement regions, and wall shear stress magnitudes and gradients were produced for all models, and correlation of these parameters with bleb formation was sought. Certain CFD parameters show significant inter-patient variability, making statistically significant correlation difficult on the partial data subset obtained so far. Conclusion: CFD models are readily producible from 3DRA data. Preliminary results indicate that bleb formation appears to be related to regions of high wall shear stress and direct impingement regions of the aneurysm wall.

Relevance: 20.00%

Abstract:

This article presents a methodology that integrates cumulative plots with probe vehicle data for estimation of travel time statistics (average, quartiles) on urban networks. The integration reduces relative deviation among the cumulative plots so that the classical analytical procedure of defining the area between the plots as the total travel time can be applied. For quartile estimation, a slicing technique is proposed. The methodology is validated with real data from Lucerne, Switzerland, and it is concluded that the travel time estimates from the proposed methodology are statistically equivalent to the observed values.
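
The area-between-plots idea and the "slicing" of the cumulative curves can be sketched as follows, assuming the counts are FIFO-consistent and strictly increasing; function names and the toy data are illustrative, not the paper's algorithm:

```python
import numpy as np

def average_travel_time(t, upstream_cum, downstream_cum):
    """Average travel time = area between the upstream and downstream
    cumulative count plots, divided by the total number of vehicles."""
    gap = upstream_cum - downstream_cum        # vehicles in transit at each t
    area = np.sum(0.5 * (gap[1:] + gap[:-1]) * np.diff(t))   # trapezoid rule
    return area / downstream_cum[-1]

def travel_time_at_quantile(t, upstream_cum, downstream_cum, q):
    """Horizontal 'slice' at cumulative count q * total: the time the n-th
    vehicle left minus the time it arrived (assumes first-in, first-out)."""
    n = q * downstream_cum[-1]
    t_in = np.interp(n, upstream_cum, t)       # n-th vehicle passes upstream
    t_out = np.interp(n, downstream_cum, t)    # n-th vehicle passes downstream
    return t_out - t_in

# Toy counts: the downstream curve lags the upstream curve by 0.5 time units.
t = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
up = np.array([0.0, 10.0, 20.0, 30.0, 40.0])
down = np.array([0.0, 5.0, 15.0, 25.0, 35.0])
avg = average_travel_time(t, up, down)
median = travel_time_at_quantile(t, up, down, 0.5)
```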

Relevance: 20.00%

Abstract:

Many donors, particularly those contemplating a substantial donation, consider whether their donation will be deductible from their taxable income. This motivation is not lost on fundraisers who conduct appeals before the end of the taxation year to capitalise on such desires. The motivation is also not lost on Treasury analysts who perceive the tax deduction as “lost” revenue and wonder if the loss is “efficient” in economic terms. Would it be more efficient for the government to give grants to deserving organisations, rather than permitting donor directed gifts? Better still, what about contracts that lock in the use of the money for a government priority? What place does tax deduction play in influencing a donor to give? Does the size of the gift bear any relationship to the size of the tax deduction? Could an increased level of donations take up an increasing shortfall in government welfare and community infrastructure spending? Despite these questions being asked regularly, little has been rigorously established about the effect of taxation deductions on a donor’s gifts.

Relevance: 20.00%

Abstract:

Purpose – The purpose of this paper is to summarise a successfully defended doctoral thesis. The main purpose of this paper is to provide a summary of the scope, and main issues raised in the thesis so that readers undertaking studies in the same or connected areas may be aware of current contributions to the topic. The secondary aims are to frame the completed thesis in the context of doctoral-level research in project management as well as offer ideas for further investigation which would serve to extend scientific knowledge on the topic. Design/methodology/approach – Research reported in this paper is based on a quantitative study using inferential statistics aimed at better understanding the actual and potential usage of earned value management (EVM) as applied to external projects under contract. Theories uncovered during the literature review were hypothesized and tested using experiential data collected from 145 EVM practitioners with direct experience on one or more external projects under contract that applied the methodology. Findings – The results of this research suggest that EVM is an effective project management methodology. The principles of EVM were shown to be significant positive predictors of project success on contracted efforts and to be a relatively greater positive predictor of project success when using fixed-price versus cost-plus (CP) type contracts. Moreover, EVM's work-breakdown structure (WBS) utility was shown to positively contribute to the formation of project contracts. The contribution was not significantly different between fixed-price and CP contracted projects, with exceptions in the areas of schedule planning and payment planning. EVM's “S” curve benefited the administration of project contracts. The contribution of the S-curve was not significantly different between fixed-price and CP contracted projects. Furthermore, EVM metrics were shown to also be important contributors to the administration of project contracts. 
The relative contribution of EVM metrics to projects under fixed-price versus CP contracts was not significantly different, with one exception in the area of evaluating and processing payment requests. Practical implications – These results have important implications for project practitioners, EVM advocates, and corporate and governmental policy makers. EVM should be considered for all projects, not only for its positive contribution to project contract development and administration, but also for its contribution to project success, regardless of contract type. Contract type should not be the sole determining factor in the decision whether or not to use EVM. More particularly, the more fixed the contracted project cost, the more the principles of EVM explain the success of the project. EVM mechanics should likewise be used in all projects regardless of contract type. Payment planning using a WBS should be emphasized in fixed-price contracts using EVM in order to help mitigate performance risk. Schedule planning using a WBS should be emphasized in CP contracts using EVM in order to help mitigate financial risk. Similarly, EVM metrics should be emphasized in fixed-price contracts when evaluating and processing payment requests. Originality/value – This paper provides a summary of cutting-edge research and a link to the published thesis, which researchers can use to understand how the research methodology was applied and how it can be extended.
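
For readers unfamiliar with the mechanics, the core EVM metrics referred to above are simple functions of planned value (PV), earned value (EV) and actual cost (AC). This is a generic textbook sketch, not code from the thesis:

```python
def evm_metrics(pv, ev, ac):
    """Core earned value management metrics from planned value (PV),
    earned value (EV) and actual cost (AC), all in the same currency.
    SPI/CPI below 1.0 indicate schedule slip / cost overrun respectively."""
    return {
        "SV": ev - pv,     # schedule variance
        "CV": ev - ac,     # cost variance
        "SPI": ev / pv,    # schedule performance index
        "CPI": ev / ac,    # cost performance index
    }

# A project planned at $100k to date has earned $90k of value at a cost
# of $120k: behind schedule (SPI 0.9) and over budget (CPI 0.75).
m = evm_metrics(pv=100.0, ev=90.0, ac=120.0)
```

Plotting cumulative PV, EV and AC over time gives the "S" curve discussed above.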

Relevance: 20.00%

Abstract:

This paper presents a maintenance optimisation method for a multi-state series-parallel system considering economic dependence and state-dependent inspection intervals. The objective function considered in the paper is the average revenue per unit time calculated based on the semi-regenerative theory and the universal generating function (UGF). A new algorithm using the stochastic ordering is also developed in this paper to reduce the search space of maintenance strategies and to enhance the efficiency of optimisation algorithms. A numerical simulation is presented in the study to evaluate the efficiency of the proposed maintenance strategy and optimisation algorithms. The simulation result reveals that maintenance strategies with opportunistic maintenance and state-dependent inspection intervals are more cost-effective when the influence of economic dependence and inspection cost is significant. The study further demonstrates that the optimisation algorithm proposed in this paper has higher computational efficiency than the commonly employed heuristic algorithms.
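
The universal generating function mentioned above admits a compact illustration: a UGF maps performance levels to probabilities, and composing two components combines their levels with an operator (sum for parallel capacities, min for a series bottleneck). This is a generic UGF sketch under those standard structure operators, not the paper's optimisation algorithm:

```python
from collections import defaultdict

def compose(u1, u2, op):
    """Compose two universal generating functions.

    A UGF is represented here as a dict {performance level: probability};
    composition combines every pair of levels with `op` and multiplies
    the corresponding probabilities."""
    out = defaultdict(float)
    for g1, p1 in u1.items():
        for g2, p2 in u2.items():
            out[op(g1, g2)] += p1 * p2
    return dict(out)

# Two identical components, each failed (capacity 0) with probability 0.1
# or working at capacity 10 with probability 0.9.
u = {0: 0.1, 10: 0.9}
parallel = compose(u, u, lambda a, b: a + b)   # parallel: capacities add
series = compose(u, u, min)                    # series: weakest link limits flow
```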

Relevance: 20.00%

Abstract:

The feasibility of using an in-hardware implementation of a genetic algorithm (GA) to solve the computationally expensive travelling salesman problem (TSP) is explored, especially with regard to hardware resource requirements for problem and population sizes. We investigate via numerical experiments whether a small population size might prove sufficient to obtain reasonable quality solutions for the TSP, thereby permitting a relatively resource-efficient hardware implementation on field programmable gate arrays (FPGAs). Software experiments on two TSP benchmarks involving 48 and 532 cities were used to explore the extent to which population size can be reduced without compromising solution quality, and the results show that a GA allowed to run for a large number of generations with a smaller population size can yield solutions of comparable quality to those obtained using a larger population. This finding is then used to investigate feasible problem sizes on a targeted Virtex-7 vx485T-2 FPGA platform via exploration of hardware resource requirements for memory and data flow operations.
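
A minimal steady-state GA of the kind described, with a deliberately small population, can be sketched in software as follows. The operator choices (binary tournament, order crossover, swap mutation) and parameter values are illustrative and untuned, not the paper's configuration:

```python
import random

def tour_length(tour, dist):
    """Total length of a closed tour under distance matrix `dist`."""
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def order_crossover(p1, p2, rng):
    """OX: keep a random slice of p1, fill the remaining cities in p2's order."""
    n = len(p1)
    a, b = sorted(rng.sample(range(n), 2))
    kept = set(p1[a:b])
    rest = iter(c for c in p2 if c not in kept)
    return [p1[i] if a <= i < b else next(rest) for i in range(n)]

def ga_tsp(dist, pop_size=8, generations=2000, mutate_p=0.2, seed=0):
    """Steady-state GA for the TSP with a small population."""
    rng = random.Random(seed)
    n = len(dist)
    pop = [rng.sample(range(n), n) for _ in range(pop_size)]
    best = min(pop, key=lambda t: tour_length(t, dist))
    for _ in range(generations):
        # binary tournament selection for each parent
        p1 = min(rng.sample(pop, 2), key=lambda t: tour_length(t, dist))
        p2 = min(rng.sample(pop, 2), key=lambda t: tour_length(t, dist))
        child = order_crossover(p1, p2, rng)
        if rng.random() < mutate_p:              # swap mutation
            i, j = rng.sample(range(n), 2)
            child[i], child[j] = child[j], child[i]
        # replace the current worst individual (elitist replacement)
        worst = max(range(pop_size), key=lambda k: tour_length(pop[k], dist))
        pop[worst] = child
        if tour_length(child, dist) < tour_length(best, dist):
            best = child
    return best

# Tiny sanity problem: four cities on a unit square; optimal tour length 4.
coords = [(0, 0), (0, 1), (1, 1), (1, 0)]
dist = [[((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5 for (x2, y2) in coords]
        for (x1, y1) in coords]
best = ga_tsp(dist)
```

The steady-state scheme (one child per generation, replacing the worst) keeps working memory proportional to the population size, which is the property that matters for an FPGA implementation.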

Relevance: 20.00%

Abstract:

Advances in algorithms for approximate sampling from a multivariable target function have led to solutions to challenging statistical inference problems that would otherwise not be considered by the applied scientist. Such sampling algorithms are particularly relevant to Bayesian statistics, since the target function is the posterior distribution of the unobservables given the observables. In this thesis we develop, adapt and apply Bayesian algorithms, whilst addressing substantive applied problems in biology and medicine as well as other applications. For an increasing number of high-impact research problems, the primary models of interest are often sufficiently complex that the likelihood function is computationally intractable. Rather than discard these models in favour of inferior alternatives, a class of Bayesian "likelihood-free" techniques (often termed approximate Bayesian computation (ABC)) has emerged in the last few years, which avoids direct likelihood computation by repeatedly sampling data from the model and comparing observed and simulated summary statistics. In Part I of this thesis we utilise sequential Monte Carlo (SMC) methodology to develop new algorithms for ABC that are more efficient in terms of the number of model simulations required and are almost black-box, since very little algorithmic tuning is required. In addition, we address the issue of deriving appropriate summary statistics to use within ABC via a goodness-of-fit statistic and indirect inference. Another important problem in statistics is the design of experiments: that is, how one should select the values of the controllable variables in order to achieve some design goal. The presence of parameter and/or model uncertainty is a computational obstacle when designing experiments, and can lead to inefficient designs if not accounted for correctly. The Bayesian framework accommodates such uncertainties in a coherent way.
If the amount of uncertainty is substantial, it can be of interest to perform adaptive designs in order to accrue information to make better decisions about future design points. This is of particular interest if the data can be collected sequentially. In a sense, the current posterior distribution becomes the new prior distribution for the next design decision. Part II of this thesis creates new algorithms for Bayesian sequential design that accommodate parameter and model uncertainty using SMC. The algorithms are substantially faster than previous approaches, allowing the simulation properties of various design utilities to be investigated in a more timely manner. Furthermore, the approach offers convenient estimation of Bayesian utilities and other quantities that are particularly relevant in the presence of model uncertainty. Finally, Part III of this thesis tackles a substantive medical problem. A neurological disorder known as motor neuron disease (MND) progressively causes motor neurons to lose the ability to innervate the muscle fibres, causing the muscles to eventually waste away. When this occurs the motor unit effectively ‘dies’. There is no cure for MND, and fatality often results from a lack of muscle strength to breathe. The prognosis for many forms of MND (particularly amyotrophic lateral sclerosis (ALS)) is particularly poor, with patients usually surviving only a small number of years after the initial onset of disease. Measuring the progress of diseases of the motor units, such as ALS, is a challenge for clinical neurologists. Motor unit number estimation (MUNE) is an attempt to directly assess underlying motor unit loss, rather than an indirect technique such as muscle strength assessment, which is generally unable to detect progression due to the body’s natural attempts at compensation.
Part III of this thesis builds upon a previous Bayesian technique, which develops a sophisticated statistical model that takes into account physiological information about motor unit activation and various sources of uncertainties. More specifically, we develop a more reliable MUNE method by applying marginalisation over latent variables in order to improve the performance of a previously developed reversible jump Markov chain Monte Carlo sampler. We make other subtle changes to the model and algorithm to improve the robustness of the approach.
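
The likelihood-free idea underlying Part I can be illustrated with the simplest member of the ABC family, a rejection sampler; the thesis develops far more simulation-efficient SMC-based versions. The toy model, tolerance rule and function names below are illustrative:

```python
import numpy as np

def abc_rejection(observed, simulate, summarise, prior_sample,
                  n_draws=20000, quantile=0.01, seed=1):
    """Basic ABC rejection: draw parameters from the prior, simulate data,
    and keep the draws whose simulated summary statistic falls closest to
    the observed one (tolerance set adaptively as a distance quantile)."""
    rng = np.random.default_rng(seed)
    s_obs = summarise(observed)
    thetas = np.array([prior_sample(rng) for _ in range(n_draws)])
    dists = np.array([abs(summarise(simulate(th, rng)) - s_obs)
                      for th in thetas])
    eps = np.quantile(dists, quantile)
    return thetas[dists <= eps]              # approximate posterior sample

# Toy model: data ~ Normal(theta, 1); summary = sample mean; prior U(-5, 5).
data = np.random.default_rng(0).normal(2.0, 1.0, size=50)
posterior = abc_rejection(
    observed=data,
    simulate=lambda th, rng: rng.normal(th, 1.0, size=50),
    summarise=np.mean,
    prior_sample=lambda rng: rng.uniform(-5, 5),
)
```

Note that no likelihood is ever evaluated: only the ability to simulate from the model is required, which is exactly what makes ABC applicable to intractable-likelihood models.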

Relevance: 20.00%

Abstract:

In the field of face recognition, Sparse Representation (SR) has received considerable attention during the past few years. Most of the relevant literature focuses on holistic descriptors in closed-set identification applications. The underlying assumption in SR-based methods is that each class in the gallery has sufficient samples and that the query lies on the subspace spanned by the gallery of the same class. Unfortunately, this assumption is easily violated in the more challenging face verification scenario, where an algorithm is required to determine whether two faces (where one or both have not been seen before) belong to the same person. In this paper, we first discuss why previous attempts with SR might not be applicable to verification problems. We then propose an alternative approach to face verification via SR. Specifically, we propose to use explicit SR encoding on local image patches rather than the entire face. The obtained sparse signals are pooled via averaging to form multiple region descriptors, which are then concatenated to form an overall face descriptor. Due to the deliberate loss of spatial relations within each region (caused by averaging), the resulting descriptor is robust to misalignment and various image deformations. Within the proposed framework, we evaluate several SR encoding techniques: l1-minimisation, Sparse Autoencoder Neural Network (SANN), and an implicit probabilistic technique based on Gaussian Mixture Models. Thorough experiments on the AR, FERET, exYaleB, BANCA and ChokePoint datasets show that the proposed local SR approach obtains considerably better and more robust performance than several previous state-of-the-art holistic SR methods, in both verification and closed-set identification problems. The experiments also show that l1-minimisation based encoding has a considerably higher computational cost than the other techniques, but leads to higher recognition rates.
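
The patch-encode, average-pool, concatenate pipeline can be sketched as below. The encoder here is a crude correlate-and-soft-threshold stand-in for a real l1 solver, and the patch size, grid partition and dictionary are illustrative assumptions, not the paper's configuration:

```python
import numpy as np

def sparse_code(patch_vec, dictionary, thresh=0.1):
    """Stand-in for an l1 sparse encoder: correlate with dictionary atoms
    and soft-threshold. A real system would solve an l1-regularised
    reconstruction problem instead."""
    c = dictionary.T @ patch_vec
    return np.sign(c) * np.maximum(np.abs(c) - thresh, 0.0)

def face_descriptor(image, dictionary, patch=8, grid=2):
    """Encode non-overlapping local patches, average-pool the codes within
    each region of a grid x grid partition, and concatenate the region
    descriptors into one face descriptor."""
    h, w = image.shape
    rh, rw = h // grid, w // grid
    regions = []
    for gy in range(grid):
        for gx in range(grid):
            codes = []
            for y in range(gy * rh, (gy + 1) * rh - patch + 1, patch):
                for x in range(gx * rw, (gx + 1) * rw - patch + 1, patch):
                    p = image[y:y + patch, x:x + patch].ravel()
                    codes.append(sparse_code(p, dictionary))
            regions.append(np.mean(codes, axis=0))  # pooling discards layout
    return np.concatenate(regions)

# Toy run: a 32x32 "face" and a random 16-atom dictionary for 8x8 patches.
rng = np.random.default_rng(0)
img = rng.standard_normal((32, 32))
D = rng.standard_normal((64, 16))
desc = face_descriptor(img, D)     # 2x2 regions x 16-dim codes
```

Verification then reduces to comparing two such descriptors, e.g. by cosine similarity against a threshold.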

Relevance: 20.00%

Abstract:

CTAC2012 was the 16th biennial Computational Techniques and Applications Conference, held at Queensland University of Technology from 23 to 26 September 2012. The ANZIAM Special Interest Group in Computational Techniques and Applications is responsible for the CTAC meetings, the first of which was held in 1981.

Relevance: 20.00%

Abstract:

Internet chatrooms are common means of interaction and communications, and they carry valuable information about formal or ad-hoc formation of groups with diverse objectives. This work presents a fully automated surveillance system for data collection and analysis in Internet chatrooms. The system has two components: an eavesdropping tool, which collects statistics on individual (chatter) and chatroom behavior that can be used to profile a chatroom and its chatters; and a computational discovery algorithm based on Singular Value Decomposition (SVD) that locates hidden communities and communication patterns within a chatroom. The eavesdropping tool is used for fine-tuning the SVD-based discovery algorithm, which can be deployed in real time and requires no semantic information processing. Evaluation of the system on real data shows that (i) the statistical properties of different chatrooms vary significantly, so profiling is possible, and (ii) the SVD-based algorithm discovers groups of chatters with up to 70-80% accuracy.
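
A minimal version of the SVD-based discovery step might look like the following: embed each chatter via the top singular vectors of a chatter-by-time-slot message-count matrix and group chatters by their dominant component. The toy data, matrix layout and grouping rule are illustrative assumptions, not the paper's algorithm:

```python
import numpy as np

def discover_groups(activity, k=2):
    """Project each chatter (row) onto the top-k left singular vectors of
    the chatter x time-slot message-count matrix, then label each chatter
    by the component on which it loads most strongly."""
    u, s, vt = np.linalg.svd(activity, full_matrices=False)
    coords = u[:, :k] * s[:k]       # low-rank embedding of each chatter
    return np.argmax(np.abs(coords), axis=1)

# Two hidden communities, active in disjoint time slots and with
# different overall volumes; off-block entries are incidental chatter.
activity = np.array([
    [9, 8, 9, 0, 1, 0],   # chatter 0
    [8, 9, 7, 1, 0, 0],   # chatter 1
    [0, 1, 0, 5, 4, 5],   # chatter 2
    [1, 0, 0, 4, 5, 4],   # chatter 3
], dtype=float)
labels = discover_groups(activity)
```

No message content is used, only who talks when, which is why the approach needs no semantic processing.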

Relevance: 20.00%

Abstract:

Motivation: Unravelling the genetic architecture of complex traits requires large amounts of data, sophisticated models and large computational resources. The lack of user-friendly software incorporating all these requisites is delaying progress in the analysis of complex traits. Methods: Linkage disequilibrium and linkage analysis (LDLA) is a high-resolution gene mapping approach based on sophisticated mixed linear models, applicable to any population structure. LDLA can use population history information in addition to pedigree and molecular markers to decompose traits into genetic components. Analyses are distributed in parallel over a large public grid of computers in the UK. Results: We have demonstrated the performance of LDLA through analyses of simulated data. There are real gains in statistical power to detect quantitative trait loci when using historical information compared with traditional linkage analysis. Moreover, the use of a grid of computers significantly increases computational speed, hence allowing analyses that would have been prohibitive on a single computer. © The Author 2009. Published by Oxford University Press. All rights reserved.
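
As a small aside on the linkage disequilibrium component, the standard r² measure of LD between two biallelic loci is a one-line computation; this is textbook LD, not the LDLA mixed-model machinery itself:

```python
def ld_r2(p_ab, p_a, p_b):
    """r^2 linkage disequilibrium between two biallelic loci, given the
    haplotype frequency p_ab of alleles A and B occurring together and
    the marginal allele frequencies p_a and p_b."""
    d = p_ab - p_a * p_b                         # disequilibrium coefficient D
    return d * d / (p_a * (1 - p_a) * p_b * (1 - p_b))

# Complete LD: A and B always co-occur on the same haplotype.
complete = ld_r2(0.5, 0.5, 0.5)      # 1.0
# Linkage equilibrium: haplotype frequency equals the product of marginals.
none = ld_r2(0.25, 0.5, 0.5)         # 0.0
```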

Relevance: 20.00%

Abstract:

An algorithm for computing dense correspondences between images of a stereo pair or image sequence is presented. The algorithm can make use of both standard matching metrics and the rank and census filters, two filters based on order statistics which have been applied to the image matching problem. Their advantages include robustness to radiometric distortion and amenability to hardware implementation. Results obtained using both real stereo pairs and a synthetic stereo pair with ground truth were compared. The rank and census filters were shown to significantly improve performance in the case of radiometric distortion. In all cases, the results obtained were comparable to, if not better than, those obtained using standard matching metrics. Furthermore, the rank and census filters have the additional advantage that their computational overhead is lower than that of these metrics. For all techniques tested, the difference between the results obtained for the synthetic stereo pair and the ground truth was small.
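
Minimal versions of the rank and census transforms, and the Hamming-distance matching cost used with census codes, might look like this. Border handling by wrap-around is a simplification of what a real matcher would do:

```python
import numpy as np

def census_transform(img, win=3):
    """Census transform: a bit string per pixel recording which neighbours
    in a win x win window are darker than it. The code depends only on the
    local intensity ordering, hence it is invariant to gain/offset
    (radiometric) distortion. Borders wrap around for brevity."""
    r = win // 2
    code = np.zeros(img.shape, dtype=np.int64)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            if dy == 0 and dx == 0:
                continue
            neighbour = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
            code = (code << 1) | (neighbour < img)
    return code

def rank_transform(img, win=3):
    """Rank transform: the count of neighbours darker than the centre pixel."""
    r = win // 2
    rank = np.zeros(img.shape, dtype=np.int64)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            if dy == 0 and dx == 0:
                continue
            rank += np.roll(np.roll(img, dy, axis=0), dx, axis=1) < img
    return rank

def hamming_cost(c1, c2):
    """Per-pixel matching cost between two census images: differing bits."""
    x = (c1 ^ c2).ravel()
    return np.array([bin(int(v)).count("1") for v in x]).reshape(c1.shape)
```

For stereo matching, the cost of a candidate disparity d is the Hamming cost between the left census image and the right census image shifted by d, typically aggregated over a small window; the bit-level operations are what make the census filter attractive for hardware.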