986 results for sample complexity


Relevance: 20.00%

Abstract:

In this study, novel methodologies for the determination of antioxidative compounds in herbs and beverages were developed. Antioxidants are compounds that can reduce, delay or inhibit oxidative events. They are a part of the human defense system and are obtained through the diet. Antioxidants are naturally present in several types of foods, e.g. in fruits, beverages, vegetables and herbs. Antioxidants can also be added to foods during manufacturing to suppress lipid oxidation and formation of free radicals under conditions of cooking or storage and to reduce the concentration of free radicals in vivo after food ingestion. There is growing interest in natural antioxidants, and effective compounds have already been identified from antioxidant classes such as carotenoids, essential oils, flavonoids and phenolic acids. The wide variety of sample matrices and analytes presents quite a challenge for the development of analytical techniques. Growing demands have been placed on sample pretreatment. In this study, three novel extraction techniques, namely supercritical fluid extraction (SFE), pressurised hot water extraction (PHWE) and dynamic sonication-assisted extraction (DSAE) were studied. SFE was used for the extraction of lycopene from tomato skins and PHWE was used in the extraction of phenolic compounds from sage. DSAE was applied to the extraction of phenolic acids from Lamiaceae herbs. In the development of extraction methodologies, the main parameters of the extraction were studied and the recoveries were compared to those achieved by conventional extraction techniques. In addition, the stability of lycopene was also followed under different storage conditions. For the separation of the antioxidative compounds in the extracts, liquid chromatographic methods (LC) were utilised. 
Two novel LC techniques, namely ultra performance liquid chromatography (UPLC) and comprehensive two-dimensional liquid chromatography (LCxLC), were studied and compared with conventional high performance liquid chromatography (HPLC) for the separation of antioxidants in beverages and Lamiaceae herbs. In LCxLC, the selection of LC mode, column dimensions and flow rates was studied and optimised to obtain efficient separation of the target compounds. In addition, the separation powers of HPLC, UPLC, HPLCxHPLC and HPLCxUPLC were compared. To exploit the benefits of an integrated system, in which sample preparation and final separation are performed in a closed unit, dynamic sonication-assisted extraction was coupled on-line to a liquid chromatograph via a solid-phase trap. The increased sensitivity was utilised in the extraction of phenolic acids from Lamiaceae herbs. The results were compared to those achieved by the LCxLC system.

Relevance: 20.00%

Abstract:

Sampling design is critical to the quality of quantitative research, yet it does not always receive appropriate attention in nursing research. The current article details how balancing probability techniques with practical considerations produced a representative sample of Australian nursing homes (NHs). Budgetary, logistical, and statistical constraints were managed by excluding some NHs (e.g., those too difficult to access) from the sampling frame; a stratified, random sampling methodology yielded a final sample of 53 NHs from a population of 2,774. In testing the adequacy of representation of the study population, chi-square tests for goodness of fit generated nonsignificant results for distribution by distance from major city and type of organization. A significant result for state/territory was expected and was easily corrected for by the application of weights. The current article provides recommendations for constructing high-quality, probability-based samples and stresses the importance of testing the representativeness of achieved samples.
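The representativeness check described above can be illustrated in a short sketch. The stratum names, population figures, and allocation below are hypothetical stand-ins, not the study's data; the Pearson chi-square statistic is compared against the 5% critical value for 2 degrees of freedom (5.991).

```python
import random
from collections import Counter

# Hypothetical population of facilities grouped by stratum (e.g., state).
population = {
    "NSW": [f"NSW-{i}" for i in range(900)],
    "VIC": [f"VIC-{i}" for i in range(700)],
    "QLD": [f"QLD-{i}" for i in range(400)],
}

def stratified_sample(pop, n_total, seed=0):
    """Draw a proportionally allocated stratified random sample."""
    rng = random.Random(seed)
    total = sum(len(v) for v in pop.values())
    sample = []
    for stratum, units in pop.items():
        k = round(n_total * len(units) / total)  # proportional allocation
        sample.extend((stratum, u) for u in rng.sample(units, k))
    return sample

def chisq_goodness_of_fit(sample, pop):
    """Pearson chi-square statistic comparing the sample's stratum
    distribution against the population proportions."""
    counts = Counter(stratum for stratum, _ in sample)
    n = len(sample)
    total = sum(len(v) for v in pop.values())
    return sum((counts[s] - n * len(u) / total) ** 2 / (n * len(u) / total)
               for s, u in pop.items())

sample = stratified_sample(population, 53)
stat = chisq_goodness_of_fit(sample, population)
# Proportional allocation should keep the statistic far below 5.991.
print(len(sample), round(stat, 3))
```

Because rounding the per-stratum allocation can shift the total by one or two units, the achieved sample size may differ slightly from the target.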

Relevance: 20.00%

Abstract:

For the consumer, flavor is arguably the most important aspect of a good coffee. Coffee flavor is extremely complex and arises from numerous chemical, biological and physical influences of cultivar, coffee cherry maturity, geographical growing location, production, processing, roasting and cup preparation. Not surprisingly, there is a large volume of published research detailing the volatile and non-volatile compounds in coffee that are likely to play a role in coffee flavor. Further, there is much published on the sensory properties of coffee. Nevertheless, the link between flavor components and the sensory properties expressed in the complex matrix of coffee is yet to be fully understood. This paper provides an overview of the chemical components that are thought to be involved in the flavor and sensory quality of Arabica coffee.

Relevance: 20.00%

Abstract:

Minimum Description Length (MDL) is an information-theoretic principle that can be used for model selection and other statistical inference tasks. There are various ways to use the principle in practice. One theoretically valid way is to use the normalized maximum likelihood (NML) criterion. Due to computational difficulties, this approach has not been used very often. This thesis presents efficient floating-point algorithms that make it possible to compute the NML for multinomial, Naive Bayes and Bayesian forest models. None of the presented algorithms relies on asymptotic analysis, and for the first two model classes we also discuss how to compute exact rational-number solutions.
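To make the computational difficulty concrete, the sketch below evaluates the multinomial NML normalizing sum directly from its definition via a naive convolution recursion. This is only an illustration of what the criterion involves, far less efficient than the algorithms the thesis presents.

```python
from functools import lru_cache
from math import comb, log2

@lru_cache(maxsize=None)
def nml_normalizer(K, n):
    """Normalizing sum C_K(n) of the NML distribution for a K-category
    multinomial and sample size n, via the convolution recursion
    C_K(n) = sum_h C(n,h) (h/n)^h ((n-h)/n)^(n-h) C_{K-1}(n-h).
    Note Python evaluates 0.0 ** 0 as 1.0, which is the convention needed."""
    if K == 1 or n == 0:
        return 1.0
    return sum(
        comb(n, h) * (h / n) ** h * ((n - h) / n) ** (n - h)
        * nml_normalizer(K - 1, n - h)
        for h in range(n + 1)
    )

def stochastic_complexity(counts):
    """NML code length (in bits) of an observed count vector:
    -log2 P(data | ML parameters) + log2 C_K(n)."""
    n = sum(counts)
    loglik = sum(h * log2(h / n) for h in counts if h > 0)
    return -loglik + log2(nml_normalizer(len(counts), n))

print(nml_normalizer(2, 1), nml_normalizer(3, 2))
```

For tiny cases the values can be checked by hand: with K = 2 and n = 1 both count vectors (0,1) and (1,0) contribute 1, so C = 2; with K = 3 and n = 2 the sum is 4.5.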

Relevance: 20.00%

Abstract:

The Minimum Description Length (MDL) principle is a general, well-founded theoretical formalization of statistical modeling. The most important notion of MDL is the stochastic complexity, which can be interpreted as the shortest description length of a given sample of data relative to a model class. The exact definition of the stochastic complexity has gone through several evolutionary steps. The latest instantiation is based on the so-called Normalized Maximum Likelihood (NML) distribution, which has been shown to possess several important theoretical properties. However, the applications of this modern version of the MDL have been quite rare because of computational complexity problems, i.e., for discrete data, the definition of NML involves an exponential sum, and in the case of continuous data, a multi-dimensional integral that is usually infeasible to evaluate or even to approximate accurately. In this doctoral dissertation, we present mathematical techniques for computing NML efficiently for some model families involving discrete data. We also show how these techniques can be used to apply MDL in two practical applications: histogram density estimation and clustering of multi-dimensional data.
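The histogram application can be made concrete with a toy example. The sketch below uses a crude two-part code length (likelihood cost plus a (K-1)/2 * log2 n parameter cost) as a stand-in for the dissertation's exact NML-based criterion; the data and penalty are illustrative only.

```python
from math import log2

def two_part_mdl_histogram(data, max_bins=20):
    """Pick the number of equal-width bins minimising a crude two-part
    MDL code length: -sum_i c_i log2(c_i / (n * width)) for the data
    plus (K-1)/2 * log2(n) for the parameters."""
    n = len(data)
    lo, hi = min(data), max(data)
    best = None
    for k in range(1, max_bins + 1):
        width = (hi - lo) / k or 1.0
        counts = [0] * k
        for x in data:
            counts[min(int((x - lo) / width), k - 1)] += 1
        data_cost = -sum(c * log2(c / (n * width)) for c in counts if c)
        total = data_cost + (k - 1) / 2 * log2(n)
        if best is None or total < best[0]:
            best = (total, k)
    return best[1]

# Bimodal sample: two well-separated clumps should favour K > 1.
data = [0.05 * i for i in range(20)] + [5 + 0.05 * i for i in range(20)]
print(two_part_mdl_histogram(data))
```

The criterion trades off fit against model cost: one wide bin wastes code length on the empty gap, while very many bins pay a parameter cost with no gain in fit, so an intermediate bin count wins.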

Relevance: 20.00%

Abstract:

Matrix decompositions, where a given matrix is represented as a product of two other matrices, are regularly used in data mining. Most matrix decompositions have their roots in linear algebra, but the needs of data mining are not always those of linear algebra. In data mining one needs to have results that are interpretable -- and what is considered interpretable in data mining can be very different to what is considered interpretable in linear algebra.

The purpose of this thesis is to study matrix decompositions that directly address the issue of interpretability. An example is a decomposition of binary matrices where the factor matrices are assumed to be binary and the matrix multiplication is Boolean. The restriction to binary factor matrices increases interpretability -- factor matrices are of the same type as the original matrix -- and allows the use of Boolean matrix multiplication, which is often more intuitive than normal matrix multiplication with binary matrices. Also several other decomposition methods are described, and the computational complexity of computing them is studied together with the hardness of approximating the related optimization problems. Based on these studies, algorithms for constructing the decompositions are proposed. Constructing the decompositions turns out to be computationally hard, and the proposed algorithms are mostly based on various heuristics. Nevertheless, the algorithms are shown to be capable of finding good results in empirical experiments conducted with both synthetic and real-world data.
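The Boolean decomposition idea can be seen in a few lines: under Boolean matrix multiplication 1 + 1 = 1, so a binary matrix may admit an exact low-rank binary factorisation that ordinary arithmetic would not. The matrices below are a hand-picked toy example, not drawn from the thesis.

```python
def boolean_product(A, B):
    """Boolean matrix product: (A o B)[i][j] = OR_k (A[i][k] AND B[k][j])."""
    n, k, m = len(A), len(B), len(B[0])
    return [[int(any(A[i][t] and B[t][j] for t in range(k)))
             for j in range(m)] for i in range(n)]

# A tiny binary data matrix and a rank-2 Boolean factorisation of it.
# The ordinary integer product of A and B would contain a 2 in the
# centre entry; under Boolean multiplication 1 + 1 = 1, so the data
# matrix is recovered exactly.
C = [[1, 1, 0],
     [1, 1, 1],
     [0, 1, 1]]
A = [[1, 0],
     [1, 1],
     [0, 1]]
B = [[1, 1, 0],
     [0, 1, 1]]
print(boolean_product(A, B) == C)
```

The factor matrices are binary like the data itself, which is the interpretability property the thesis emphasises: each column of A and row of B can be read as a "pattern" of the same kind as the original rows and columns.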

Relevance: 20.00%

Abstract:

The reliable assessment of macrophyte biomass is fundamental for ecological research and management of freshwater ecosystems. While dry mass is routinely used to determine aquatic plant biomass, wet (fresh) mass can be more practical. We tested the accuracy and precision of wet mass measurements by using a salad spinner to remove surface water from four macrophyte species differing in growth form and architectural complexity. The salad spinner aided in making precise and accurate wet mass measurements, with less than 3% error. There was also little difference between operators, with a user bias estimated to be below 5%. To achieve this level of precision, only 10–20 turns of the salad spinner are needed. Therefore, the wet mass of a sample can be determined in less than 1 min. We demonstrated that a salad spinner is a rapid and economical technique to enable precise and accurate macrophyte wet mass measurements and is particularly suitable for experimental work. The method will also be useful for fieldwork in situations when sample sizes are not overly large.

Relevance: 20.00%

Abstract:

We develop a two-stage split vector quantization method with optimum bit allocation to minimise computational complexity. This also results in a much lower memory requirement than the recently proposed switched split vector quantization method. To improve the rate-distortion performance further, a region-specific normalization is introduced, which results in a 1 bit/vector improvement over the typical two-stage split vector quantizer for wide-band LSF quantization.
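The split-VQ idea underlying the method can be sketched minimally (the dimensions, codebooks, and bit allocation here are illustrative, not the paper's design): splitting the vector lets each subvector be searched in its own small codebook, so search cost grows with the sum rather than the product of the codebook sizes.

```python
def nearest(codebook, v):
    """Index of the codeword with minimum squared Euclidean distance."""
    return min(range(len(codebook)),
               key=lambda i: sum((a - b) ** 2 for a, b in zip(codebook[i], v)))

def split_vq(codebooks, vector, split=2):
    """Quantize each subvector with its own codebook; return the chosen
    indices and the concatenated reconstruction."""
    parts = [vector[i:i + split] for i in range(0, len(vector), split)]
    idx = [nearest(cb, p) for cb, p in zip(codebooks, parts)]
    recon = [x for cb, i in zip(codebooks, idx) for x in cb[i]]
    return idx, recon

# A 4-dimensional vector split into two 2-dimensional subvectors, each
# with a 4-entry codebook: 4 + 4 = 8 distance computations instead of
# the 4 * 4 = 16 an unsplit codebook of equivalent resolution needs.
codebooks = [
    [(0.0, 0.0), (0.0, 1.0), (1.0, 0.0), (1.0, 1.0)],  # subvector 1
    [(0.0, 0.0), (0.5, 0.5), (1.0, 1.0), (2.0, 2.0)],  # subvector 2
]
idx, recon = split_vq(codebooks, [0.9, 0.1, 0.6, 0.4])
print(idx, recon)
```

A two-stage scheme, as in the paper, would additionally quantize the residual of the first stage with a second set of codebooks; that refinement is omitted here.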

Relevance: 20.00%

Abstract:

We present two discriminative language modelling techniques for a Lempel-Ziv-Welch (LZW) based language identification (LID) system. The previous approach to LID using the LZW algorithm was to use the LZW pattern tables directly for language modelling. However, since the patterns in a language's pattern table are shared by other languages' pattern tables, confusability prevailed in the LID task. To overcome this, we present two pruning techniques: (i) Language-Specific (LS-LZW), in which patterns common to more than one pattern table are removed; and (ii) Length-Frequency product based (LF-LZW), in which patterns whose length-frequency product falls below a threshold are removed. These approaches reduce the classification score (compression ratio [LZW-CR] or the weighted discriminant score [LZW-WDS]) for non-native languages and increase the LID performance considerably. Moreover, the memory and computational requirements of these techniques are much lower than those of the basic LZW techniques.
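The LS-LZW pruning idea can be sketched as follows, with a toy two-language corpus and plain LZW dictionary construction; this illustrates the principle only, not the paper's system.

```python
def lzw_table(text):
    """Build an LZW pattern table (set of phrases) from training text."""
    table = set(text)  # start from the single characters seen
    w = ""
    for c in text:
        if w + c in table:
            w += c
        else:
            table.add(w + c)
            w = c
    return table

def ls_prune(tables):
    """LS-style pruning: keep only the patterns unique to each language,
    removing any pattern that appears in another language's table."""
    pruned = {}
    for lang, tab in tables.items():
        others = set().union(*(t for l, t in tables.items() if l != lang))
        pruned[lang] = tab - others
    return pruned

tables = {"en": lzw_table("the cat sat on the mat"),
          "fr": lzw_table("le chat est sur le tapis")}
pruned = ls_prune(tables)
# Shared characters and substrings vanish; discriminative patterns remain.
print("the" in pruned["en"], "the" in pruned["fr"])
```

After pruning, a test utterance scored against each table matches only language-discriminative patterns, which is what reduces the cross-language confusability described above.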

Relevance: 20.00%

Abstract:

Rhizoctonia spp. are ubiquitous soil-inhabiting fungi that enter into pathogenic or symbiotic associations with plants. In general, Rhizoctonia spp. are regarded as plant pathogenic fungi, and many cause root rot and other plant diseases which result in considerable economic losses both in agriculture and forestry. Many Rhizoctonia strains enter into symbiotic mycorrhizal associations with orchids, and some hypovirulent strains are promising biocontrol candidates in preventing host plant infection by pathogenic Rhizoctonia strains. This work focuses on uni- and binucleate Rhizoctonia (respectively UNR and BNR) strains belonging to the teleomorphic genus Ceratobasidium, but multinucleate Rhizoctonia (MNR) belonging to the teleomorphic genus Thanatephorus and ectomycorrhizal fungal species, such as Suillus bovinus, were also included in DNA probe development work. Strain-specific probes were developed to target rDNA ITS (internal transcribed spacer) sequences (ITS1, 5.8S and ITS2) and applied in Southern dot blot and liquid hybridization assays. Liquid hybridization was more sensitive and the size of the hybridized PCR products could be detected simultaneously, but the advantage in Southern hybridization was that sample DNA could be used without additional PCR amplification. The impacts of four Finnish BNR Ceratorhiza sp. strains (251, 266, 268 and 269) on Scots pine (Pinus sylvestris) seedling growth were investigated, and the infection biology and infection levels were microscopically examined following trypan blue staining of infected roots. All BNR strains enhanced early seedling growth and affected the root architecture, while the infection levels remained low. The fungal infection was restricted to the outer cortical regions of long roots, and typical monilioid cells were detected with strain 268. The interactions of pathogenic UNR Ceratobasidium bicorne strain 1983-111/1N and endophytic BNR Ceratorhiza sp.
strain 268 were studied in single or dual inoculated Scots pine roots. The fungal infection levels and host defence-gene activity of nine transcripts [phenylalanine ammonia lyase (pal1), stilbene synthase (STS), chalcone synthase (CHS), short-root specific peroxidase (Psyp1), antimicrobial peptide gene (Sp-AMP), rapidly elicited defence-related gene (PsACRE), germin-like protein (PsGER1), CuZn-superoxide dismutase (SOD), and dehydrin-like protein (dhy-like)] were measured from differentially treated and untreated control roots by quantitative real-time PCR (qRT-PCR). The infection level of pathogenic UNR was restricted in BNR-pre-inoculated Scots pine roots, while UNR was more competitive in simultaneous dual infection. The STS transcript was highly up-regulated in all treated roots, while the CHS, pal1, and Psyp1 transcripts were more moderately activated. No significant activity of the Sp-AMP, PsACRE, PsGER1, SOD, or dhy-like transcripts was detected compared to control roots. The integrated experiments presented here provide tools to assist in the future detection of these fungi in the environment and to understand the host infection biology and defence, and the relationships between these interacting fungi in roots and soils. This study further confirms the complexity of the Rhizoctonia group both phylogenetically and in their infection biology and plant host specificity. The knowledge obtained could be applied in integrated forestry nursery management programmes.

Relevance: 20.00%

Abstract:

Information exchange (IE) is a critical component of the complex collaborative medication process in residential aged care facilities (RACFs). Designing information and communication technology (ICT) to support complex processes requires a profound understanding of the IE that underpins their execution. There is little existing research that investigates the complexity of IE in RACFs and its impact on ICT design. The aim of this study was thus to undertake an in-depth exploration of the IE process involved in medication management to identify its implications for the design of ICT. The study was undertaken at a large metropolitan facility in NSW, Australia. A total of three focus groups, eleven interviews and two observation sessions were conducted between July and August 2010. Process modelling was undertaken by translating the qualitative data via in-depth iterative inductive analysis. The findings highlight the complexity and collaborative nature of IE in RACF medication management. The resulting models emphasize the need to: a) deal with temporal complexity; b) rely on an interdependent set of coordinative artefacts; and c) use synchronous communication channels for coordination. Taken together, these are crucial aspects of the IE process in RACF medication management that need to be catered for when designing ICT in this critical area. This study provides important new evidence of the advantages of viewing a process as part of a system, rather than as segregated tasks, as a means of identifying the latent requirements for an ICT design that can support complex collaborative processes like medication management in RACFs. © 2012 IEEE.

Relevance: 20.00%

Abstract:

This paper deals with low maximum-likelihood (ML)-decoding complexity, full-rate and full-diversity space-time block codes (STBCs), which also offer large coding gain, for the 2 transmit antenna, 2 receive antenna (2 x 2) and the 4 transmit antenna, 2 receive antenna (4 x 2) MIMO systems. Presently, the best known STBC for the 2 x 2 system is the Golden code and that for the 4 x 2 system is the DjABBA code. Following the approach of Biglieri, Hong, and Viterbo, a new STBC is presented in this paper for the 2 x 2 system. This code matches the Golden code in performance and ML-decoding complexity for square QAM constellations, while it has lower ML-decoding complexity with the same performance for non-rectangular QAM constellations. This code is also shown to be information-lossless and diversity-multiplexing gain (DMG) tradeoff optimal. This design procedure is then extended to the 4 x 2 system, and a code which outperforms the DjABBA code for QAM constellations with lower ML-decoding complexity is presented. So far, the Golden code has been reported to have an ML-decoding complexity of the order of M^4 for square QAM of size M. In this paper, a scheme that reduces its ML-decoding complexity to M^2√M is presented.

Relevance: 20.00%

Abstract:

In this paper, we present a low-complexity algorithm for detection in high-rate, non-orthogonal space-time block coded (STBC) large-multiple-input multiple-output (MIMO) systems that achieve high spectral efficiencies of the order of tens of bps/Hz. We also present a training-based iterative detection/channel estimation scheme for such large STBC MIMO systems. Our simulation results show that excellent bit error rate and nearness-to-capacity performance are achieved by the proposed multistage likelihood ascent search (M-LAS) detector in conjunction with the proposed iterative detection/channel estimation scheme at low complexities. The fact that we could show such good results for large STBCs like 16 x 16 and 32 x 32 STBCs from Cyclic Division Algebras (CDA) operating at spectral efficiencies in excess of 20 bps/Hz (even after accounting for the overheads meant for pilot based training for channel estimation and turbo coding) establishes the effectiveness of the proposed detector and channel estimator. We decode perfect codes of large dimensions using the proposed detector. With the feasibility of such a low-complexity detection/channel estimation scheme, large-MIMO systems with tens of antennas operating at several tens of bps/Hz spectral efficiencies can become practical, enabling interesting high data rate wireless applications.
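The likelihood ascent search at the heart of the detector can be sketched for a toy real-valued BPSK system. This is a single-symbol-update LAS only, with illustrative dimensions and noise; the paper's M-LAS detector adds multi-symbol update stages and scales to far larger systems.

```python
import random

def ml_cost(H, y, x):
    """ML metric ||y - Hx||^2 for a real-valued channel model."""
    n = len(x)
    return sum((y[i] - sum(H[i][j] * x[j] for j in range(n))) ** 2
               for i in range(len(y)))

def las_detect(H, y, x0, max_iter=200):
    """Likelihood ascent search over BPSK symbols: starting from an
    initial estimate, flip the single symbol giving the largest cost
    reduction, and repeat until no flip helps (a local ML point)."""
    x = list(x0)
    for _ in range(max_iter):
        best_j, best_c = None, ml_cost(H, y, x)
        for j in range(len(x)):
            x[j] = -x[j]          # trial flip
            c = ml_cost(H, y, x)
            x[j] = -x[j]          # undo
            if c < best_c:
                best_j, best_c = j, c
        if best_j is None:
            break                 # no flip improves the metric
        x[best_j] = -x[best_j]
    return x

# Toy 8 x 8 real system with BPSK symbols and mild noise.
rng = random.Random(1)
n = 8
x_true = [rng.choice([-1, 1]) for _ in range(n)]
H = [[rng.gauss(0, 1) for _ in range(n)] for _ in range(n)]
y = [sum(H[i][j] * x_true[j] for j in range(n)) + rng.gauss(0, 0.1)
     for i in range(n)]
x_hat = las_detect(H, y, [1] * n)
print(ml_cost(H, y, x_hat) <= ml_cost(H, y, [1] * n))
```

Because each accepted flip strictly lowers the ML metric, the search descends monotonically and terminates at a local ML point; the multistage refinements in the paper are what push the solution quality close to true ML at large dimensions.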

Relevance: 20.00%

Abstract:

In this paper, we present a low-complexity, near maximum-likelihood (ML) performance achieving detector for large MIMO systems having tens of transmit and receive antennas. Such large MIMO systems are of interest because of the high spectral efficiencies possible in such systems. The proposed detection algorithm, termed the multistage likelihood-ascent search (M-LAS) algorithm, is rooted in Hopfield neural networks, and is shown to possess excellent performance as well as complexity attributes. In terms of performance, in a 64 x 64 V-BLAST system with 4-QAM, the proposed algorithm achieves an uncoded BER of 10^-3 at an SNR just about 1 dB away from the AWGN-only SISO performance given by Q(√SNR). In terms of coded BER, with a rate-3/4 turbo code at a spectral efficiency of 96 bps/Hz, the algorithm performs to within about 4.5 dB of theoretical capacity, which is remarkable in terms of both the high spectral efficiency and the nearness to theoretical capacity. Our simulation results show that the above performance is achieved with a complexity of just O(NtNr) per symbol, where Nt and Nr denote the number of transmit and receive antennas.