73 results for Statistical packages


Relevance: 20.00%

Abstract:

Statistical copolymers of indigo (1a) and N-acetylindigo (1b) building blocks with defined structures were studied. They belong to the class of polymeric colorants. The polymers consist of 5,5′-connected indigo units with keto structure and N-acetylindigo units with the uncommon tautomeric indoxyl/indolone (=1H-indol-3-ol/3H-indol-3-one) structure (see 2a and 2b in Fig. 1). They formed amorphous salts of elongated monomer lengths as compared to monomeric indigo. The polymers were studied by various spectroscopic and physico-chemical methods in the solid state and in solution. As shown by small-angle neutron scattering (SANS) and transmission electron microscopy (TEM), disk-like polymeric aggregates were present in concentrated solutions (DMSO and aq. NaOH soln.). Their thickness and radii were determined to be ca. 0.4 and ca. 80 nm, respectively. The molecular masses of the aggregates, calculated from the disk volumes and from a Guinier analysis, were in good agreement with each other. Defined structural changes of the polymer chains were observed during storage for several weeks in concentrated DMSO solutions. The original keto structure of the unsubstituted indigo building blocks converted to the more flexible indoxyl/indolone structure. The new polymers were simultaneously stabilized by intermolecular H-bonds to give aggregates, preferentially dimers. Both aggregation and tautomerization were reversible upon dissolution. The polymers were synthesized by repeated oxidative coupling of 1,1′-diacetyl-3,3′-dihydroxybis-indoles 5 (from 1,1′-diacetyl-3,3′-bis(acetyloxy)bis-indoles 6), followed by gradual hydrolysis of the primarily formed poly(N,N′-diacetylindigos) 7 (Scheme). N,N′-Diacetylbis-anthranilic acids 9 were isolated as by-products.
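
As a rough back-of-the-envelope illustration of how an aggregate molecular mass follows from the reported disk dimensions (thickness ca. 0.4 nm, radius ca. 80 nm), the Python sketch below multiplies the disk volume by an assumed mass density; the density value is a placeholder, not a figure from the study.

import numpy as np

AVOGADRO = 6.022e23          # mol^-1
thickness_nm = 0.4           # disk thickness from SANS/TEM
radius_nm = 80.0             # disk radius from SANS/TEM
density_g_per_cm3 = 1.5      # assumed polymer density (placeholder, not from the study)

disk_volume_nm3 = np.pi * radius_nm**2 * thickness_nm
disk_volume_cm3 = disk_volume_nm3 * 1e-21               # 1 nm^3 = 1e-21 cm^3
mass_per_aggregate_g = density_g_per_cm3 * disk_volume_cm3
molar_mass_g_per_mol = mass_per_aggregate_g * AVOGADRO

print(f"Disk volume: {disk_volume_nm3:.0f} nm^3")
print(f"Estimated aggregate molar mass: {molar_mass_g_per_mol:.2e} g/mol")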

Relevance: 20.00%

Abstract:

Plasminogen (Pg), the precursor of the proteolytic and fibrinolytic enzyme of blood, is converted to the active enzyme plasmin (Pm) by plasminogen activators such as tissue plasminogen activator and urokinase, as well as by the bacterial activators streptokinase and staphylokinase, which are used clinically for thrombolysis. The identification of Pg activators is therefore an important step in understanding their functional mechanism and in deriving new therapies.

Relevance: 20.00%

Abstract:

Objectives: To (a) assess the statistical power of nursing research to detect small, medium, and large effect sizes; (b) estimate the experiment-wise Type I error rate in these studies; and (c) assess the extent to which (i) a priori power analyses, (ii) effect sizes (and interpretations thereof), and (iii) confidence intervals were reported. Design: Statistical review. Data sources: Papers published in the 2011 volumes of the 10 highest-ranked nursing journals, based on their 5-year impact factors. Review methods: Papers were assessed for statistical power, control of experiment-wise Type I error, reporting of a priori power analyses, reporting and interpretation of effect sizes, and reporting of confidence intervals. The analyses were based on 333 papers, from which 10,337 inferential statistics were identified. Results: The median power to detect small, medium, and large effect sizes was .40 (interquartile range [IQR] = .24-.71), .98 (IQR = .85-1.00), and 1.00 (IQR = 1.00-1.00), respectively. The median experiment-wise Type I error rate was .54 (IQR = .26-.80). A priori power analyses were reported in 28% of papers. Effect sizes were routinely reported for Spearman's rank correlations (100% of papers in which this test was used), Poisson regressions (100%), odds ratios (100%), Kendall's tau correlations (100%), Pearson's correlations (99%), logistic regressions (98%), structural equation modelling/confirmatory factor analyses/path analyses (97%), and linear regressions (83%), but were reported less often for two-proportion z tests (50%), analyses of variance/analyses of covariance/multivariate analyses of variance (18%), t tests (8%), Wilcoxon's tests (8%), chi-squared tests (8%), and Fisher's exact tests (7%), and not at all for sign tests, Friedman's tests, McNemar's tests, multi-level models, and Kruskal-Wallis tests. Effect sizes were infrequently interpreted. Confidence intervals were reported in 28% of papers. Conclusion: The use, reporting, and interpretation of inferential statistics in nursing research need substantial improvement. Most importantly, researchers should abandon the misleading practice of interpreting the results of inferential tests solely on the basis of whether they are statistically significant and, instead, focus on reporting and interpreting effect sizes, confidence intervals, and significance levels. Nursing researchers also need to conduct and report a priori power analyses, and to address the inflation of the experiment-wise Type I error rate in their studies.
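
As a concrete illustration of the two quantities the review tracks, the short Python sketch below uses statsmodels to compute the power of an independent-samples t test for Cohen's small, medium and large effects, the a priori sample size needed for 80% power, and the experiment-wise (familywise) Type I error rate when many tests are run at alpha = .05; the sample size and number of tests are invented examples.

from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Power of an independent-samples t test with an assumed n = 64 per group
for label, d in [("small", 0.2), ("medium", 0.5), ("large", 0.8)]:
    power = analysis.solve_power(effect_size=d, nobs1=64, alpha=0.05)
    print(f"{label:6s} effect (d = {d}): power = {power:.2f}")

# A priori sample size needed for 80% power to detect a medium effect
n_required = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.8)
print(f"n per group for 80% power at d = 0.5: {n_required:.0f}")

# Experiment-wise Type I error rate for k independent tests at alpha = .05
k = 20
familywise_alpha = 1 - (1 - 0.05) ** k
print(f"Experiment-wise alpha with {k} tests at .05: {familywise_alpha:.2f}")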

Relevance: 20.00%

Abstract:

The introduction of linear functions is the turning point where many students decide whether mathematics is useful or not. This means the roles of parameters and variables in linear functions could be considered 'threshold concepts'. There is recognition that linear functions can be taught in context through the exploration of linear modelling examples, but this has its limitations. Statistical data are now easily attainable, and graphics or computer algebra system (CAS) calculators are common in many classrooms. This technology provides easy access to different representations of linear functions as well as the ability to fit a least-squares line to real-life data, so these calculators could support an alternative approach to the introduction of linear functions. This study compares the results of an end-of-topic test for two classes of Australian middle secondary students at a regional school to determine whether such an alternative approach is feasible. Test questions were grouped by concept, and the mean test results of the two classes were compared concept by concept. This analysis revealed that the students following the alternative approach demonstrated greater competence with non-standard questions.
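
A minimal sketch, using made-up (x, y) data, of the least-squares line fit that graphics/CAS calculators perform on real-life data:

import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])          # e.g. hours of practice (invented)
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])          # e.g. test score gain (invented)

slope, intercept = np.polyfit(x, y, deg=1)        # degree-1 polynomial = a line
print(f"y = {slope:.2f}x + {intercept:.2f}")

# Residuals show how well the fitted line models the data
residuals = y - (slope * x + intercept)
print("residuals:", np.round(residuals, 2))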

Relevance: 20.00%

Abstract:

Retrieval systems with non-deterministic output are widely used in information retrieval. Common examples include sampling, approximation algorithms, and interactive user input. The effectiveness of such systems differs not just across topics, but also across instances of the system. This inherent variance presents a dilemma: what is the best way to measure the effectiveness of a non-deterministic IR system? Existing approaches to IR evaluation do not consider this problem, or its potential impact on statistical significance. In this paper, we explore how such variance can affect system comparisons, and propose an evaluation framework and methodologies capable of making these comparisons. Using distributed information retrieval as a case study, we show that the proposed approaches provide a consistent and reliable methodology for comparing the effectiveness of a non-deterministic system with a deterministic or another non-deterministic system. In addition, we present a statistical best practice for safely showing that a non-deterministic IR system has effectiveness equivalent to another IR system, and for avoiding the common pitfall of misusing a lack of significance as proof that two systems have equivalent effectiveness.
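
The abstract does not name the specific statistical best practice, so the sketch below shows one common choice for demonstrating equivalence rather than relying on a non-significant difference: a paired TOST (two one-sided tests) on per-topic scores. The scores and the equivalence margin are invented for illustration.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
system_a = rng.normal(0.30, 0.05, size=50)               # e.g. per-topic AP (simulated)
system_b = system_a + rng.normal(0.0, 0.01, size=50)     # near-identical system (simulated)
margin = 0.02                                            # assumed equivalence margin

diff = system_b - system_a
n = len(diff)
se = diff.std(ddof=1) / np.sqrt(n)

# TOST: reject both "mean diff <= -margin" and "mean diff >= +margin"
t_lower = (diff.mean() + margin) / se
t_upper = (diff.mean() - margin) / se
p_lower = 1 - stats.t.cdf(t_lower, df=n - 1)
p_upper = stats.t.cdf(t_upper, df=n - 1)
p_tost = max(p_lower, p_upper)

verdict = "equivalent within the margin" if p_tost < 0.05 else "equivalence not demonstrated"
print(f"TOST p-value: {p_tost:.4f} -> {verdict}")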

Relevance: 20.00%

Abstract:

Background: Inequalities in eating behaviours are often linked to the types of food retailers accessible in neighbourhood environments. Numerous studies have aimed to identify whether access to healthy and unhealthy food retailers is socioeconomically patterned across neighbourhoods, and thus a potential risk factor for dietary inequalities. Existing reviews have examined differences between methodologies, particularly focussing on definitions of neighbourhoods and of food outlet access measures. However, no review has informatively discussed the suitability of the statistical methodologies employed, a key issue determining the validity of study findings. Our aim was to examine the suitability of the statistical approaches adopted in these analyses.
Methods: Searches were conducted for articles published from 2000–2014. Eligible studies included objective measures of the neighbourhood food environment and neighbourhood-level socio-economic status, with a statistical analysis of the association between food outlet access and socio-economic status.
Results: Fifty-four papers were included. Outlet accessibility was typically defined as the distance to the nearest outlet from the neighbourhood centroid, or as the number of food outlets within a neighbourhood (or buffer). To assess whether these measures were linked to neighbourhood disadvantage, common statistical methods included ANOVA, correlation, and Poisson or negative binomial regression. Although all studies involved spatial data, few considered spatial analysis techniques or spatial autocorrelation.
Conclusions: With advances in GIS software, sophisticated measures of neighbourhood outlet accessibility can be considered. However, approaches to statistical analysis often appear less sophisticated. Care should be taken to consider assumptions underlying the analysis and the possibility of spatially correlated residuals which could affect the results.
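
As an illustration of the kind of spatial diagnostic the review finds is often skipped, the sketch below computes Moran's I on regression residuals using simulated neighbourhood data and an assumed k-nearest-neighbour weight matrix; all values are invented.

import numpy as np

rng = np.random.default_rng(1)
n = 100
coords = rng.uniform(0, 10, size=(n, 2))                     # neighbourhood centroids (simulated)
deprivation = rng.normal(0, 1, size=n)                       # SES exposure (simulated)
outlets = rng.poisson(np.exp(0.5 + 0.3 * deprivation))       # outlet counts (simulated)

# Simple log-linear least-squares fit standing in for a count regression (illustration only)
X = np.column_stack([np.ones(n), deprivation])
beta, *_ = np.linalg.lstsq(X, np.log(outlets + 1), rcond=None)
residuals = np.log(outlets + 1) - X @ beta

# Row-standardised k-nearest-neighbour spatial weights
k = 5
d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)
np.fill_diagonal(d, np.inf)
W = np.zeros((n, n))
rows = np.repeat(np.arange(n), k)
W[rows, np.argsort(d, axis=1)[:, :k].ravel()] = 1.0
W /= W.sum(axis=1, keepdims=True)

# Moran's I of the residuals; values far from -1/(n-1) suggest spatially correlated residuals
z = residuals - residuals.mean()
moran_i = (n / W.sum()) * (z @ W @ z) / (z @ z)
print(f"Moran's I of residuals: {moran_i:.3f} (expected under no autocorrelation: {-1 / (n - 1):.3f})")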

Relevance: 20.00%

Abstract:

Objective: To provide statistician end users with a visual language environment for complex statistical survey design and implementation. Methods: We have developed, in conjunction with professional statisticians, the Statistical Design Language (SDL), an integrated suite of visual languages aimed at supporting the process of designing statistical surveys, together with its support environment, SDLTool. SDL comprises five diagrammatic notations: survey diagrams, data diagrams, technique diagrams, task diagrams and process diagrams. SDLTool provides an integrated environment supporting the design, coordination, execution, sharing and publication of complex statistical survey techniques as web services. SDLTool allows model components to be associated with survey artefacts, including data sets, metadata, and statistical package analysis scripts, and elements of the survey design model can be executed to implement the survey analysis. Results: We describe three evaluations of SDL and SDLTool: use of the notation by expert statisticians to design and execute surveys; a usability evaluation of the environment; and an assessment of several generated statistical analysis web services. Conclusion: We have shown the effectiveness of SDLTool for supporting statistical survey design and implementation. Practice implications: We have developed a more effective approach to supporting statisticians in their survey design work.
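
The abstract does not publish the interface of the generated services, so the following is a purely hypothetical Python sketch of what invoking one such survey-analysis web service might look like; the endpoint URL, payload fields and response shape are all invented for illustration.

import requests

SERVICE_URL = "https://example.org/sdltool/services/stratified-mean"   # hypothetical endpoint

payload = {
    "dataset": "household_survey_2010.csv",   # hypothetical survey artefact reference
    "strata": "region",                       # hypothetical stratification variable
    "variable": "weekly_income",              # hypothetical analysis variable
}

response = requests.post(SERVICE_URL, json=payload, timeout=30)
response.raise_for_status()
print(response.json())   # e.g. per-stratum estimates produced by the underlying analysis script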

Relevance: 20.00%

Abstract:

Artificial neural network (ANN) models are able to predict future events based on current data. The usefulness of an ANN lies in the model's capacity to learn from and adjust its weights in response to previous errors during training. In this study, we carefully analyse existing neuronal spike sorting algorithms. Current methods use clustering to establish the ground truth, which requires tedious procedures for feature selection and evaluation of the selected features; even so, the accuracy of the resulting clusters remains questionable. Here, we develop an ANN model to address these drawbacks and the major challenges in neuronal spike sorting. New enhancements are introduced into the conventional backpropagation ANN for determining the network weights, input nodes, target node, and error calculation. Coiflet modelling of noise is employed to enhance the spike shape features and suppress noise. The ANN is used in conjunction with a special spiking-event detection technique to prioritize the targets. The proposed enhancements bolster the training concept and, on the whole, contribute to sorting neuronal spikes with close approximations.
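
The abstract gives no implementation details for the Coiflet modelling of noise, so the sketch below shows a generic Coiflet wavelet-denoising pass over a simulated noisy spike using PyWavelets; the wavelet order, decomposition level and threshold rule are assumptions, not the authors' method.

import numpy as np
import pywt

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 256)
spike = np.exp(-((t - 0.5) ** 2) / 0.001) - 0.4 * np.exp(-((t - 0.56) ** 2) / 0.002)   # simulated spike
noisy = spike + rng.normal(0, 0.2, size=t.size)                                         # added noise

# Decompose with a Coiflet wavelet, soft-threshold the detail coefficients, then reconstruct
coeffs = pywt.wavedec(noisy, "coif3", level=4)
sigma = np.median(np.abs(coeffs[-1])) / 0.6745          # noise estimate from the finest scale
threshold = sigma * np.sqrt(2 * np.log(noisy.size))     # universal threshold
denoised_coeffs = [coeffs[0]] + [pywt.threshold(c, threshold, mode="soft") for c in coeffs[1:]]
denoised = pywt.waverec(denoised_coeffs, "coif3")

print("residual RMS after denoising:", np.sqrt(np.mean((denoised[:t.size] - spike) ** 2)))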