940 results for APPROXIMATE ENTROPY
Abstract:
Although accelerometers are extensively used for assessing gait, limited research has evaluated the concurrent validity of these devices on less predictable walking surfaces or the comparability of different methods used for gravitational acceleration compensation. This study evaluated the concurrent validity of trunk accelerations derived from a tri-axial inertial measurement unit while walking on firm, compliant and uneven surfaces, and contrasted two methods used to remove gravitational accelerations: (i) subtraction of the best linear fit from the data (detrending), and (ii) use of orientation information (quaternions) from the inertial measurement unit. Twelve older and twelve younger adults walked at their preferred speed along firm, compliant and uneven walkways. Accelerations were evaluated for the thoracic spine (T12) using a tri-axial inertial measurement unit and an eleven-camera Vicon system. The findings demonstrated excellent agreement between accelerations derived from the inertial measurement unit and the motion analysis system, including while walking on uneven surfaces that better approximate a real-world setting (all differences <0.16 m.s−2). Detrending produced slightly better agreement between the inertial measurement unit and Vicon system on firm surfaces (delta range: −0.05 to 0.06 vs. 0.00 to 0.14 m.s−2), whereas the quaternion method performed better when walking on compliant and uneven walkways (delta range: −0.16 to −0.02 vs. −0.07 to 0.07 m.s−2). The technique used to compensate for gravitational accelerations requires consideration in future research, particularly when walking on compliant and uneven surfaces. These findings demonstrate that trunk accelerations can be accurately measured using a wireless inertial measurement unit and are appropriate for research that evaluates healthy populations in complex environments.
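The quaternion method of gravity compensation can be sketched as follows: the sensor-frame acceleration is rotated into the global frame using the IMU's orientation quaternion, and the gravity vector is then subtracted. This is a minimal illustration, assuming a (w, x, y, z) unit-quaternion convention and gravity of 9.81 m.s−2 along the global vertical; it is not the authors' implementation.

```python
import math

def quat_rotate(q, v):
    """Rotate vector v by unit quaternion q = (w, x, y, z)."""
    w, x, y, z = q
    vx, vy, vz = v
    # t = 2 * cross(q_vec, v)
    tx = 2 * (y * vz - z * vy)
    ty = 2 * (z * vx - x * vz)
    tz = 2 * (x * vy - y * vx)
    # v' = v + w*t + cross(q_vec, t)
    return (
        vx + w * tx + (y * tz - z * ty),
        vy + w * ty + (z * tx - x * tz),
        vz + w * tz + (x * ty - y * tx),
    )

def remove_gravity(accel_sensor, q, g=9.81):
    """Express a sensor-frame acceleration in the global frame, then
    subtract the gravity vector (assumed along the global vertical)."""
    ax, ay, az = quat_rotate(q, accel_sensor)
    return (ax, ay, az - g)

# A stationary sensor at identity orientation measures only the +g reaction,
# so the movement-related acceleration should come out as zero.
print(remove_gravity((0.0, 0.0, 9.81), (1.0, 0.0, 0.0, 0.0)))  # → (0.0, 0.0, 0.0)
```

The detrending alternative contrasted in the study would instead subtract a best-fit line from each acceleration channel, which works well when the sensor's orientation relative to gravity stays roughly constant, as on firm level walkways.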
Abstract:
The cotton strip assay (CSA) is an established technique for measuring soil microbial activity. The technique involves burying cotton strips and measuring their tensile strength after a certain time. This gives a measure of the rotting rate, R, of the cotton strips; R is then a measure of soil microbial activity. This paper examines properties of the technique and indicates how the assay can be optimised. Humidity conditioning of the cotton strips before measuring their tensile strength reduced the within- and between-day variance and enabled the distribution of the tensile strength measurements to approximate normality. The test data came from a three-way factorial experiment (two soils, two temperatures, three moisture levels). The cotton strips were buried in the soil for intervals of time ranging up to 6 weeks. This enabled the rate of loss of cotton tensile strength with time to be studied under a range of conditions. An inverse cubic model accounted for greater than 90% of the total variation within each treatment combination. This offers support for summarising the decomposition process by a single parameter, R. The approximate variance of the decomposition rate was estimated from a function incorporating the variance of tensile strength and the differential of the function for the rate of decomposition, R, with respect to tensile strength. This variance function has a minimum when the measured strength is approximately 2/3 of the original strength. The estimates of R are almost unbiased and relatively robust against the cotton strips being left in the soil for more or less than the optimal time. We conclude that the rotting rate R should be measured using the inverse cubic equation, and that the cotton strips should be left in the soil until their strength has been reduced to about 2/3 of its original value.
Abstract:
Time series classification has been extensively explored in many fields of study. Most methods are based on the historical or current information extracted from data. However, if interest is in a specific future time period, methods that directly relate to forecasts of time series are much more appropriate. An approach to time series classification is proposed based on a polarization measure of forecast densities of time series. By fitting autoregressive models, forecast replicates of each time series are obtained via the bias-corrected bootstrap, and a stationarity correction is considered when necessary. Kernel estimators are then employed to approximate forecast densities, and discrepancies of forecast densities of pairs of time series are estimated by a polarization measure, which evaluates the extent to which two densities overlap. Following the distributional properties of the polarization measure, a discriminant rule and a clustering method are proposed to conduct the supervised and unsupervised classification, respectively. The proposed methodology is applied to both simulated and real data sets, and the results show desirable properties.
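The density-overlap idea behind the polarization measure can be sketched as follows. This is a generic illustration, not the authors' statistic: forecast replicates for two series are turned into Gaussian kernel density estimates, and their discrepancy is summarized by the overlap coefficient ∫ min(f, g) dx, which evaluates the extent to which the two densities overlap. The bandwidth, grid and simulated replicates are all invented for the example.

```python
import math, random

def kde(sample, x, bandwidth):
    """Gaussian kernel density estimate of `sample` evaluated at point x."""
    n = len(sample)
    c = 1.0 / (n * bandwidth * math.sqrt(2 * math.pi))
    return c * sum(math.exp(-0.5 * ((x - s) / bandwidth) ** 2) for s in sample)

def overlap(sample_a, sample_b, bandwidth=0.5, lo=-10.0, hi=10.0, steps=400):
    """Overlap coefficient of two estimated densities on a fixed grid:
    1.0 means identical densities, 0.0 means fully separated."""
    dx = (hi - lo) / steps
    total = 0.0
    for i in range(steps):
        x = lo + (i + 0.5) * dx
        total += min(kde(sample_a, x, bandwidth), kde(sample_b, x, bandwidth)) * dx
    return total

# Stand-ins for bootstrap forecast replicates of three time series.
random.seed(1)
close = [random.gauss(0.0, 1.0) for _ in range(200)]
near = [random.gauss(0.2, 1.0) for _ in range(200)]
far = [random.gauss(6.0, 1.0) for _ in range(200)]
print(overlap(close, near) > overlap(close, far))  # similar forecasts overlap more
```

In the paper, the replicates would come from a bias-corrected bootstrap of fitted autoregressive models rather than from the toy samplers above, and the overlap values then feed a discriminant rule or clustering method.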
Abstract:
This paper addresses the problem of determining optimal designs for biological process models with intractable likelihoods, with the goal of parameter inference. The Bayesian approach is to choose a design that maximises the mean of a utility, where the utility is a function of the posterior distribution; its estimation therefore requires likelihood evaluations. However, many problems in experimental design involve models with intractable likelihoods, that is, likelihoods that are neither available analytically nor computable in a reasonable amount of time. We propose a novel solution using indirect inference (II), a well-established method in the literature, and the Markov chain Monte Carlo (MCMC) algorithm of Müller et al. (2004). Indirect inference employs an auxiliary model with a tractable likelihood in conjunction with the generative model, the assumed true model of interest, which has an intractable likelihood. Our approach is to estimate a map between the parameters of the generative and auxiliary models, using simulations from the generative model. An II posterior distribution is formed to expedite utility estimation. We also present a modification to the utility that allows the Müller algorithm to sample from a substantially sharpened utility surface, with little computational effort. Unlike competing methods, the II approach can handle complex design problems for models with intractable likelihoods on a continuous design space, with possible extension to many observations. The methodology is demonstrated using two stochastic models: a simple tractable death process used to validate the approach, and a motivating stochastic model for the population evolution of macroparasites.
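The core indirect-inference step, learning a map between generative and auxiliary parameters from simulations of the generative model, can be sketched in miniature. Everything here is a toy stand-in, not the paper's models: the "generative" model is actually tractable (a Gaussian location model playing the role of an intractable simulator), the auxiliary parameter is the sample median, and the map is fitted by ordinary least squares over a parameter grid.

```python
import random

def simulate_generative(theta, n=200, rng=random):
    """Stand-in simulator for a model whose likelihood we pretend is
    intractable (assumption): a Gaussian location model."""
    return [rng.gauss(theta, 1.0) for _ in range(n)]

def auxiliary_fit(data):
    """Auxiliary model parameter estimate: the sample median."""
    s = sorted(data)
    m = len(s) // 2
    return s[m] if len(s) % 2 else 0.5 * (s[m - 1] + s[m])

def estimate_map(theta_grid, reps=20, seed=0):
    """Least-squares linear map phi ≈ a + b*theta between generative
    parameter theta and auxiliary parameter phi, learned from simulations."""
    rng = random.Random(seed)
    thetas, phis = [], []
    for theta in theta_grid:
        for _ in range(reps):
            thetas.append(theta)
            phis.append(auxiliary_fit(simulate_generative(theta, rng=rng)))
    n = len(thetas)
    mt = sum(thetas) / n
    mp = sum(phis) / n
    b = sum((t - mt) * (p - mp) for t, p in zip(thetas, phis)) / \
        sum((t - mt) ** 2 for t in thetas)
    a = mp - b * mt
    return a, b

a, b = estimate_map([0.0, 0.5, 1.0, 1.5, 2.0])
print(round(a, 2), round(b, 2))  # map should be close to the identity here
```

For this toy model the median tracks the mean, so the fitted map is close to the identity; in the paper's setting the same idea lets the tractable auxiliary likelihood stand in for the intractable generative one when estimating design utilities.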
Abstract:
The mineral kidwellite, a hydrated hydroxy phosphate of ferric iron and sodium of approximate formula NaFe9³⁺(PO4)6(OH)11⋅3H2O, has been studied using a combination of electron microscopy with EDX and vibrational spectroscopic techniques. Raman spectroscopy identifies intense bands at 978 and 1014 cm−1. These bands are attributed to the PO43− ν1 symmetric stretching mode. The ν3 antisymmetric stretching modes are observed as a large number of Raman bands. The series of Raman bands at 1034, 1050, 1063, 1082, 1129, 1144 and 1188 cm−1 are attributed to the ν3 antisymmetric stretching bands of the PO43− and HOPO32− units. The observation of these multiple Raman bands in the symmetric and antisymmetric stretching region gives credence to the concept that both phosphate and hydrogen phosphate units exist in the structure of kidwellite. The series of Raman bands at 557, 570, 588, 602, 631, 644 and 653 cm−1 are assigned to the PO43− ν2 bending modes. The series of Raman bands at 405, 444, 453, 467, 490 and 500 cm−1 are attributed to the PO43− and HOPO32− ν4 bending modes. The spectrum is quite broad, but Raman bands may be resolved at 3122, 3231, 3356, 3466 and 3580 cm−1. These bands are assigned to water stretching vibrational modes. The number and position of these bands suggest that water is in different molecular environments with differing hydrogen bond distances. Infrared bands at 3511 and 3359 cm−1 are ascribed to the OH stretching vibration of the OH units. Very broad bands at 3022 and 3299 cm−1 are attributed to the OH stretching vibrations of water. Vibrational spectroscopy offers insights into the molecular structure of the phosphate mineral kidwellite.
Abstract:
This study demonstrates a novel method for testing the hypothesis that variations in primary and secondary particle number concentration (PNC) in urban air are related to residual fuel oil combustion at a coastal port lying 30 km upwind, by examining the correlation between PNC and airborne particle composition signatures chosen for their sensitivity to the elemental contaminants present in residual fuel oil. Residual fuel oil combustion indicators were chosen by comparing the sensitivity of a range of concentration ratios to airborne emissions originating from the port. The most responsive were combinations of vanadium and sulfur concentration ([S], [V]) expressed as ratios with respect to black carbon concentration ([BC]). These correlated significantly with ship activity at the port and with the fraction of time during which the wind blew from the port. The average [V] when the wind was predominantly from the port was 0.52 ng.m-3 (87%) higher than the average for all wind directions and 0.83 ng.m-3 (280%) higher than that for the lowest vanadium-yielding wind direction, considered to approximate the natural background. Shipping was found to be the main source of V impacting urban air quality in Brisbane. However, contrary to the stated hypothesis, increases in PNC-related measures did not correlate with ship emission indicators or ship traffic. Hence, at this site, ship emissions were not found to be a major contributor to PNC compared to other fossil fuel combustion sources such as road traffic, airport and refinery emissions.
Abstract:
Although urbanization can promote social and economic development, it can also cause various problems. As the key decision makers of urbanization, local governments should be able to evaluate urbanization performance, summarize experiences, and find problems caused by urbanization. This paper introduces a hybrid Entropy–McKinsey Matrix method for evaluating sustainable urbanization. The McKinsey Matrix is commonly referred to as the GE Matrix. The values of a development index (DI) and coordination index (CI) are calculated by employing the Entropy method and are used as a basis for constructing a GE Matrix. The matrix can assist in assessing sustainable urbanization performance by locating the urbanization state point. A case study of the city of Jinan in China demonstrates the process of using the evaluation method. The case study reveals that the method is an effective tool in helping policy makers understand the performance of urban sustainability and therefore formulate suitable strategies for guiding urbanization toward better sustainability.
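The Entropy method used to weight the indicators can be sketched as follows. This is a generic illustration of the entropy weight technique, not the paper's exact computation: indicators whose values are more dispersed across observations carry lower information entropy and therefore receive larger weights, and an index such as the DI can then be formed as a weighted sum of normalized indicator values. The toy data are invented.

```python
import math

def entropy_weights(matrix):
    """Entropy weight method. Rows are observations (e.g. years), columns
    are urbanization indicators; values are assumed positive. Returns one
    weight per indicator, summing to 1."""
    n_rows = len(matrix)
    n_cols = len(matrix[0])
    k = 1.0 / math.log(n_rows)
    entropies = []
    for j in range(n_cols):
        col = [row[j] for row in matrix]
        total = sum(col)
        probs = [v / total for v in col]   # share of indicator j per observation
        entropies.append(-k * sum(p * math.log(p) for p in probs if p > 0))
    divergence = [1.0 - e for e in entropies]  # 1 - entropy = information utility
    d_sum = sum(divergence)
    return [d / d_sum for d in divergence]

# Toy data: 3 years x 2 indicators; the second indicator varies much more
# across years, so it should receive the larger weight.
data = [[10, 1], [10, 5], [10, 25]]
w = entropy_weights(data)
print(w[1] > w[0])  # the more dispersed indicator gets the larger weight
```

A development index could then be computed per year as `sum(w[j] * normalized_value[j])`, with the CI built analogously from coordination indicators.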
Abstract:
The major structural components of HIV are synthesized as a 55-kDa polyprotein, Gag. Particle formation is driven by the self-assembly of Gag into a curved hexameric lattice, the structure of which is poorly understood. We used cryoelectron tomography and contrast-transfer-function-corrected subtomogram averaging to study the structure of the assembled immature Gag lattice to approximately 17-angstrom resolution. Gag is arranged in the immature virus as a single, continuous, but incomplete hexameric lattice whose curvature is mediated without a requirement for pentameric defects. The resolution of the structure allows positioning of individual protein domains. High-resolution crystal structures were fitted into the reconstruction to locate protein-protein interfaces involved in Gag assembly, and to identify the structural transformations associated with virus maturation. The results of this study suggest a concept for the formation of nonsymmetrical enveloped viruses of variable sizes.
Abstract:
Bayesian experimental design is a fast-growing area of research with many real-world applications. As computational power has increased over the years, so has the development of simulation-based design methods, which involve a number of algorithms, such as Markov chain Monte Carlo, sequential Monte Carlo and approximate Bayes methods, allowing more complex design problems to be solved. The Bayesian framework provides a unified approach for incorporating prior information and/or uncertainties regarding the statistical model with a utility function which describes the experimental aims. In this paper, we provide a general overview of the concepts involved in Bayesian experimental design, and focus on describing some of the more commonly used Bayesian utility functions and methods for their estimation, as well as a number of algorithms that are used to search over the design space to find the Bayesian optimal design. We also discuss other computational strategies for further research in Bayesian optimal design.
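The simulation-based approach described above can be illustrated in miniature: for each candidate design, draw parameters from the prior, simulate the experiment, score a utility of the resulting posterior, and average. The model below (a one-parameter logistic response with a grid posterior and negative posterior variance as the utility) is an invented toy, not one of the methods surveyed in the paper.

```python
import math, random

def posterior_var(x, y, n, grid, prior):
    """Grid posterior variance of theta after observing y successes in n
    trials at design point x, under a logistic response (toy assumed model)."""
    post = []
    for theta, pr in zip(grid, prior):
        p = 1.0 / (1.0 + math.exp(-(theta - x)))
        post.append(pr * (p ** y) * ((1.0 - p) ** (n - y)))
    z = sum(post)
    post = [w / z for w in post]
    mean = sum(t * w for t, w in zip(grid, post))
    return sum(w * (t - mean) ** 2 for t, w in zip(grid, post))

def expected_utility(x, n=20, draws=300, seed=0):
    """Monte Carlo estimate of the expected utility (negative posterior
    variance) of running n trials at design point x."""
    rng = random.Random(seed)
    grid = [i / 10.0 for i in range(-40, 41)]        # theta grid on [-4, 4]
    prior = [math.exp(-0.5 * t * t) for t in grid]   # discretized N(0, 1)
    z = sum(prior)
    prior = [p / z for p in prior]
    total = 0.0
    for _ in range(draws):
        theta = rng.gauss(0.0, 1.0)                  # draw from the prior
        p = 1.0 / (1.0 + math.exp(-(theta - x)))
        y = sum(rng.random() < p for _ in range(n))  # simulate the experiment
        total += -posterior_var(x, y, n, grid, prior)
    return total / draws

# A design near the prior mean should be more informative than one far out.
u_near, u_far = expected_utility(0.0), expected_utility(5.0)
print(u_near > u_far)
```

A design search would repeat this estimate over many candidate x values; the MCMC and SMC algorithms discussed in the paper are, in essence, smarter ways of performing that search when the utility surface is expensive to evaluate.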
Abstract:
Objective To evaluate methods for monitoring monthly aggregated hospital adverse event data that display clustering, non-linear trends and possible autocorrelation. Design Retrospective audit. Setting The Northern Hospital, Melbourne, Australia. Participants 171,059 patients admitted between January 2001 and December 2006. Measurements The analysis is illustrated with 72 months of patient fall injury data using a modified Shewhart U control chart, and charts derived from a quasi-Poisson generalised linear model (GLM) and a generalised additive mixed model (GAMM) that included an approximate upper control limit. Results The data were overdispersed and displayed a downward trend and possible autocorrelation. The downward trend was followed by a stable, predictable period after December 2003. The GLM-estimated incidence rate ratio was 0.98 (95% CI 0.98 to 0.99) per month. The GAMM-fitted count fell from 12.67 (95% CI 10.05 to 15.97) in January 2001 to 5.23 (95% CI 3.82 to 7.15) in December 2006 (p<0.001). The corresponding values for the GLM were 11.9 and 3.94. Residual plots suggested that the GLM underestimated the rate at the beginning and end of the series and overestimated it in the middle. The data suggested a more rapid rate fall before 2004 and a steady state thereafter, a pattern reflected in the GAMM chart. The approximate upper two-sigma equivalent control limit in the GLM and GAMM charts identified 2 months that showed possible special-cause variation. Conclusion Charts based on GAMM analysis are a suitable alternative to Shewhart U control charts with these data.
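For reference, the limits of a textbook Shewhart U chart, the baseline against which the GLM and GAMM charts are compared, can be sketched as follows. This is the classical formula (centre line ū with limits ū ± 3√(ū/nᵢ) for exposure nᵢ), not the modified chart used in the study, and the monthly counts below are invented for illustration.

```python
import math

def u_chart_limits(counts, exposures, sigma=3.0):
    """Classical Shewhart U chart: pooled event rate with per-period
    control limits for counts observed over varying exposure
    (e.g. monthly falls per admission)."""
    u_bar = sum(counts) / sum(exposures)  # pooled rate (centre line)
    limits = []
    for n in exposures:
        half_width = sigma * math.sqrt(u_bar / n)
        limits.append((max(0.0, u_bar - half_width), u_bar + half_width))
    return u_bar, limits

# Toy data: monthly fall-injury counts and admissions (invented).
counts = [12, 9, 11, 15, 8, 26]
admissions = [2300, 2200, 2400, 2350, 2250, 2300]
u_bar, limits = u_chart_limits(counts, admissions)
for c, n, (lo, hi) in zip(counts, admissions, limits):
    flag = "signal" if not (lo <= c / n <= hi) else "ok"
    print(f"rate={c / n:.5f} limits=({lo:.5f}, {hi:.5f}) {flag}")
```

The study's point is that when the underlying rate trends downward, a fixed centre line like ū misclassifies early and late months; the GAMM chart instead lets the centre line follow the fitted trend.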
Abstract:
This paper presents our system to address the CogALex-IV 2014 shared task of identifying a single word most semantically related to a group of 5 words (queries). Our system uses an implementation of a neural language model and identifies the answer word by finding the word representation most semantically similar to the sum of the query representations. It is a fully unsupervised system which learns on around 20% of the UkWaC corpus. It correctly identifies 85 exact targets out of 2,000 queries, and 285 approximate targets within its lists of 5 suggestions.
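The answer-selection step, summing the query vectors and taking the nearest non-query word by cosine similarity, can be sketched as follows. The toy 3-dimensional embeddings are invented for illustration; a real system would use representations learned by the neural language model from the corpus.

```python
import math

# Toy embedding table (invented); a real system would use vectors learned
# by a neural language model on a large corpus such as UkWaC.
embeddings = {
    "dog":    [0.9, 0.1, 0.0],
    "cat":    [0.8, 0.2, 0.1],
    "puppy":  [0.9, 0.2, 0.0],
    "kitten": [0.8, 0.3, 0.1],
    "pet":    [0.85, 0.2, 0.05],
    "car":    [0.0, 0.1, 0.9],
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def answer(queries, k=5):
    """Sum the query vectors, then rank all non-query words by cosine
    similarity to the sum; return the top-k candidate answers."""
    dims = len(next(iter(embeddings.values())))
    total = [sum(embeddings[q][i] for q in queries) for i in range(dims)]
    candidates = [w for w in embeddings if w not in queries]
    return sorted(candidates, key=lambda w: cosine(embeddings[w], total),
                  reverse=True)[:k]

print(answer(["dog", "cat", "puppy", "kitten"])[0])  # → pet
```

The shared task scores both the exact top answer and whether the target appears anywhere in the 5-item suggestion list, which is what the 85/285 figures above distinguish.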
Abstract:
The US Clean Air Act Amendments introduced an emissions trading system to regulate SO2 emissions. This study finds that changes in SO2 emissions prices are related to innovations induced by these amendments. We find that electricity-generating plants were able to increase electricity output and reduce emissions of SO2 and NOx from 1995 to 2007 due to the introduction of the allowance trading system. However, compared to the approximately 8% per year of exogenous technological progress, the induced effect is relatively small, and the contribution of the induced effect to overall technological progress is about 1-2%.
Abstract:
Age-related Macular Degeneration (AMD) is one of the major causes of vision loss and blindness in the ageing population. Currently, there is no cure for AMD; however, early detection and subsequent treatment may prevent severe vision loss or slow the progression of the disease. AMD can be classified into two types: dry and wet AMD. Most people with macular degeneration are affected by the dry form. Early symptoms of AMD are the formation of drusen and yellow pigmentation. These lesions are identified by manual inspection of fundus images by ophthalmologists. This is a time-consuming, tiresome process, and hence an automated AMD screening tool can aid clinicians in their diagnosis significantly. This study proposes an automated dry AMD detection system using various entropies (Shannon, Kapur, Renyi and Yager), Higher Order Spectra (HOS) bispectra features, Fractal Dimension (FD), and Gabor wavelet features extracted from greyscale fundus images. The features are ranked using t-test, Kullback–Leibler Divergence (KLD), Chernoff Bound and Bhattacharyya Distance (CBBD), Receiver Operating Characteristics (ROC) curve-based and Wilcoxon ranking methods in order to select optimum features, and classified into normal and AMD classes using Naive Bayes (NB), k-Nearest Neighbour (k-NN), Probabilistic Neural Network (PNN), Decision Tree (DT) and Support Vector Machine (SVM) classifiers. The performance of the proposed system is evaluated using private (Kasturba Medical Hospital, Manipal, India), Automated Retinal Image Analysis (ARIA) and STructured Analysis of the Retina (STARE) datasets. The proposed system yielded the highest average classification accuracies of 90.19%, 95.07% and 95% with 42, 54 and 38 optimal ranked features using the SVM classifier for the private, ARIA and STARE datasets, respectively.
This automated AMD detection system can be used for mass fundus image screening and aid clinicians by making better use of their expertise on selected images that require further examination.
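As an illustration of the entropy features mentioned above, here is a minimal sketch of Shannon and Renyi entropies computed from a greyscale intensity histogram; the Kapur and Yager variants generalize these definitions. The patch data are invented, and the study's exact feature-extraction pipeline is not reproduced.

```python
import math

def shannon_entropy(pixels, levels=256):
    """Shannon entropy (bits) of a greyscale intensity histogram: low for
    flat patches, high for richly textured ones."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    n = len(pixels)
    h = -sum((c / n) * math.log2(c / n) for c in hist if c)
    return h + 0.0  # normalize -0.0 to 0.0

def renyi_entropy(pixels, alpha=2.0, levels=256):
    """Renyi entropy of order alpha; tends to Shannon entropy as alpha → 1."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    n = len(pixels)
    s = sum((c / n) ** alpha for c in hist if c)
    return math.log2(s) / (1.0 - alpha)

flat = [128] * 64                 # uniform patch: no texture
textured = list(range(64))        # every pixel a different grey level
print(shannon_entropy(flat))      # → 0.0
print(shannon_entropy(textured))  # → 6.0
print(renyi_entropy(textured))    # → 6.0
```

Intuitively, AMD lesions such as drusen change the intensity distribution of the fundus image, which is why histogram-based entropies can serve as discriminative features.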
Abstract:
Age-related macular degeneration (AMD) affects the central vision and subsequently may lead to visual loss in people over 60 years of age. There is no permanent cure for AMD, but early detection and successive treatment may improve visual acuity. AMD is mainly classified into dry and wet types; the dry form is more common in the aging population. AMD is characterized by drusen, yellow pigmentation, and neovascularization. These lesions are examined through visual inspection of retinal fundus images by ophthalmologists, which is laborious, time-consuming, and resource-intensive. Hence, in this study, we have proposed an automated AMD detection system using the discrete wavelet transform (DWT) and feature ranking strategies. The first four statistical moments (mean, variance, skewness, and kurtosis), energy, entropy, and Gini index-based features are extracted from the DWT coefficients. We have used five feature ranking strategies (t-test, Kullback–Leibler Divergence (KLD), Chernoff Bound and Bhattacharyya Distance, receiver operating characteristics curve-based, and Wilcoxon) to identify the optimal feature set. A set of supervised classifiers, namely support vector machine (SVM), decision tree, k-nearest neighbor (k-NN), Naive Bayes, and probabilistic neural network, were used to evaluate the highest performance measure using the minimum number of features in classifying normal and dry AMD classes. The proposed framework obtained an average accuracy of 93.70%, sensitivity of 91.11%, and specificity of 96.30% using KLD ranking and the SVM classifier. We have also formulated an AMD Risk Index using the selected features to classify the normal and dry AMD classes using a single number. The proposed system can be used to assist clinicians and also in mass AMD screening programs.
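The moment, energy, entropy and Gini-index features can be sketched for a single coefficient vector as follows. The normalizations are standard textbook definitions and are an assumption, since the abstract does not give the exact formulas; in the study these would be applied to DWT sub-band coefficients rather than the invented vector below.

```python
import math

def moment_features(coeffs):
    """First four statistical moments plus energy, entropy and Gini index
    of a coefficient vector (standard definitions, assumed)."""
    n = len(coeffs)
    mean = sum(coeffs) / n
    var = sum((c - mean) ** 2 for c in coeffs) / n
    sd = math.sqrt(var)
    skew = sum(((c - mean) / sd) ** 3 for c in coeffs) / n if sd else 0.0
    kurt = sum(((c - mean) / sd) ** 4 for c in coeffs) / n if sd else 0.0
    energy = sum(c * c for c in coeffs)
    # Distribution over coefficient magnitudes for entropy and Gini index.
    total = sum(abs(c) for c in coeffs)
    p = [abs(c) / total for c in coeffs]
    entropy = -sum(q * math.log2(q) for q in p if q)
    sorted_p = sorted(p)
    gini = sum((2 * (i + 1) - n - 1) * q for i, q in enumerate(sorted_p)) / n
    return {"mean": mean, "variance": var, "skewness": skew,
            "kurtosis": kurt, "energy": energy,
            "entropy": entropy, "gini": gini}

f = moment_features([1.0, -1.0, 1.0, -1.0])
print(f["mean"], f["variance"], f["gini"])  # → 0.0 1.0 0.0
```

The Gini index here measures the sparsity of the coefficient magnitudes (0 for perfectly even, approaching 1 when energy concentrates in a few coefficients), which complements entropy as a texture descriptor.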
Abstract:
Heliothine moths (Lepidoptera: Heliothinae) include some of the world's most devastating pest species. Whereas the majority of nonpest heliothines specialize on a single plant family, genus, or species, pest species are highly polyphagous, with populations often escalating in size as they move from one crop species to another. Here, we examine the current literature on heliothine host-selection behavior with the aim of providing a knowledge base for research scientists and pest managers. We review the host relations of pest heliothines, with a particular focus on Helicoverpa armigera (Hubner), the most economically damaging of all heliothine species. We then consider the important question of what constitutes a host plant in these moths, and some of the problems that arise when trying to determine host plant status from empirical studies on host use. The top six host plant families in the two main Australian pest species (H. armigera and Helicoverpa punctigera Wallengren) are the same, and the top three (Asteraceae, Fabaceae, and Malvaceae) are ranked the same (in terms of the number of host species on which eggs or larvae have been identified), suggesting that these species may use similar cues to identify their hosts. In contrast, for the two key pest heliothines in the Americas, the Fabaceae contains approximately one-third of the hosts for both. For Helicoverpa zea (Boddie), the remaining hosts are more evenly distributed, with Solanaceae next, followed by Poaceae, Asteraceae, Malvaceae, and Rosaceae. For Heliothis virescens (F.), the next highest five families are Malvaceae, Asteraceae, Solanaceae, Convolvulaceae, and Scrophulariaceae. Again there is considerable overlap in host use at the generic and even species level. H. armigera is the most widely distributed species, recorded from 68 plant families worldwide, but only 14 families are recorded as containing a host in all geographic areas.
A few crop hosts are used throughout the range, as expected, but in some cases there are anomalies, perhaps because host plant relation studies are not comparable. Studies on the attraction of heliothines to plant odors are examined in the context of our current understanding of insect olfaction, with the aim of better understanding the connection between odor perception and host choice. Finally, we discuss research into sustainable management of pest heliothines using knowledge of heliothine behavior and ecology. A coordinated international research effort is needed to advance our knowledge of host relations in widely distributed polyphagous species, instead of the localized, piecemeal approaches to understanding these insects that have been the norm to date.