891 results for Applied Behavior Analysis


Relevance:

30.00%

Publisher:

Abstract:

The multifractal properties of two indices of geomagnetic activity, Dst (representative of low latitudes) and ap (representative of global geomagnetic activity), together with the solar X-ray brightness, Xl, during the period from 1 March 1995 to 17 June 2003 are examined using multifractal detrended fluctuation analysis (MF-DFA). The h(q) curves of Dst and ap in the MF-DFA are similar to each other, but they differ from that of Xl, indicating that the scaling properties of Xl are different from those of Dst and ap. Hence, one should not predict the magnitude of magnetic storms directly from solar X-ray observations. However, a strong relationship exists between the classes of solar X-ray irradiance (the classes being chosen to separate solar flares of class X-M, class C, and class B or less, including no flares) in hourly measurements and the geomagnetic disturbances (large to moderate, small, or quiet) seen in Dst and ap during the active period. Each time series was converted into a symbolic sequence using three classes. The frequencies of substrings in the symbolic sequences, which yield the measure representations, then characterize the pattern of space weather events. Using the MF-DFA method and traditional multifractal analysis, we calculate the h(q), D(q), and τ(q) curves of the measure representations. The τ(q) curves indicate that the measure representations of these three indices are multifractal. On the basis of this three-class clustering, we find that the h(q), D(q), and τ(q) curves of the measure representations of these three indices are similar to each other for positive values of q. Hence, a positive flare storm class dependence is reflected in the scaling exponents h(q) in the MF-DFA and the multifractal exponents D(q) and τ(q). This finding indicates that the use of solar flare classes could improve the prediction of the Dst classes.
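The symbolic-dynamics step described above (three-class coding of a series followed by substring counting) can be sketched in a few lines. The thresholds and sample series below are invented for illustration and are not taken from the paper:

```python
from collections import Counter
from itertools import product

def symbolize(series, low, high):
    """Map each value to one of three classes: 0 (quiet), 1 (moderate), 2 (active)."""
    return [0 if x < low else (1 if x < high else 2) for x in series]

def measure_representation(symbols, k):
    """Relative frequency of every length-k word over the 3-letter alphabet."""
    counts = Counter(tuple(symbols[i:i + k]) for i in range(len(symbols) - k + 1))
    total = sum(counts.values())
    return {w: counts.get(w, 0) / total for w in product(range(3), repeat=k)}

# hypothetical hourly index values and class thresholds
series = [0.1, 0.5, 2.0, 1.8, 0.2, 0.05, 1.1, 2.5, 0.3, 0.4]
mu = measure_representation(symbolize(series, low=0.5, high=1.5), k=2)
```

The dictionary `mu` is the measure representation over all 3^k words; the multifractal exponents h(q), D(q) and τ(q) would then be computed from these frequencies.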

Relevance:

30.00%

Publisher:

Abstract:

Purpose - Since the beginning of human existence, humankind has sought, organized and used information as it evolved patterns and practices of human information behavior. However, the field of human information behavior (HIB) has not heretofore pursued an evolutionary understanding of information behavior. The goal of this exploratory study is to provide insight into the information behavior of various individuals from the past, to begin the development of an evolutionary perspective for our understanding of HIB. Design/methodology/approach - This paper presents findings from a qualitative analysis of the autobiographies and personal writings of several historical figures, including Napoleon Bonaparte, Charles Darwin, Giacomo Casanova and others. Findings - Analysis of their writings shows that these persons of the past articulated aspects of their HIB, including information seeking, information organization and information use, providing tangible insights into their information-related thoughts and actions. Practical implications - This paper has implications for expanding our evolutionary understanding of information behavior and provides a broader context for the HIB research field. Originality/value - This is the first paper in the information science field of HIB to study the information behavior of historical figures and to begin to develop an evolutionary framework for HIB research. © Emerald Group Publishing Limited.

Relevance:

30.00%

Publisher:

Abstract:

Bioinformatics involves analyses of biological data such as DNA sequences, microarrays and protein-protein interaction (PPI) networks. Its two main objectives are the identification of genes or proteins and the prediction of their functions. Biological data often contain uncertain and imprecise information. Fuzzy theory provides useful tools for dealing with this type of information and hence has played an important role in analyses of biological data. In this thesis, we aim to develop new fuzzy techniques and apply them to DNA microarrays and PPI networks. We focus on three problems: (1) clustering of microarrays; (2) identification of disease-associated genes in microarrays; and (3) identification of protein complexes in PPI networks. The first part of the thesis aims to detect, by the fuzzy C-means (FCM) method, clustering structures in DNA microarrays corrupted by noise. Because of the presence of noise, some clustering structures found in random data may not have any biological significance. In this part, we propose to combine the FCM with empirical mode decomposition (EMD) for clustering microarray data. The purpose of EMD is to reduce, preferably to remove, the effect of noise, resulting in what is known as denoised data. We call this method the fuzzy C-means method with empirical mode decomposition (FCM-EMD). We applied this method to yeast and serum microarrays, using silhouette values to assess the quality of clustering. The results indicate that the clustering structures of denoised data are more reasonable, implying that genes have tighter association with their clusters. Furthermore, we found that the estimation of the fuzzy parameter m, which is a difficult step, can to some extent be avoided by analysing denoised microarray data. The second part aims to identify disease-associated genes from DNA microarray data generated under different conditions, e.g., from patients and healthy people.
We developed a type-2 fuzzy membership (FM) function for the identification of disease-associated genes. This approach was applied to diabetes and lung cancer data, and a comparison with the original FM test was carried out. Among the ten best-ranked genes of diabetes identified by the type-2 FM test, seven have been confirmed as diabetes-associated genes according to gene description information in GenBank and the published literature; an additional gene is also identified. Among the ten best-ranked genes identified in the lung cancer data, seven are confirmed to be associated with lung cancer or its treatment. The type-2 FM-d values are significantly different, which makes the identifications more convincing than those of the original FM test. The third part of the thesis aims to identify protein complexes in large interaction networks. Identification of protein complexes is crucial to understanding the principles of cellular organisation and to predicting protein functions. In this part, we propose a novel method which combines fuzzy clustering and interaction probability to identify the overlapping and non-overlapping community structures in PPI networks, and then to detect protein complexes in these sub-networks. Our method is based on both the fuzzy relation model and the graph model. We applied the method to several PPI networks and compared it with a popular protein complex identification method, the clique percolation method. For the same data, we detected more protein complexes. We also applied our method to two social networks. The results show that our method works well for detecting sub-networks and gives a reasonable understanding of these communities.
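As a rough illustration of the clustering machinery involved, here is a minimal one-dimensional fuzzy C-means sketch. This is only the standard FCM; the EMD denoising stage and the type-2 FM test are beyond this snippet, and all data below are invented:

```python
def fcm(points, c=2, m=2.0, iters=50):
    """Minimal 1-D fuzzy C-means. m > 1 is the fuzzifier; each membership
    row u[i] sums to 1 over the c clusters."""
    spts = sorted(points)
    # deterministic spread-out initialisation of the cluster centres
    centers = [spts[round(i * (len(spts) - 1) / (c - 1))] for i in range(c)]
    for _ in range(iters):
        u = []
        for x in points:
            d = [abs(x - v) + 1e-12 for v in centers]  # avoid division by zero
            u.append([1.0 / sum((d[j] / d[k]) ** (2.0 / (m - 1.0)) for k in range(c))
                      for j in range(c)])
        # update each centre as the membership-weighted mean of the data
        centers = [sum(u[i][j] ** m * points[i] for i in range(len(points)))
                   / sum(u[i][j] ** m for i in range(len(points)))
                   for j in range(c)]
    return centers, u

pts = [0.0, 0.1, 0.2, 5.0, 5.1, 5.2]  # two well-separated toy "expression" groups
centers, u = fcm(pts)
```

On real microarray data each point would be a high-dimensional expression profile, and the choice of the fuzzifier m is exactly the step the abstract notes can be sidestepped by denoising.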

Relevance:

30.00%

Publisher:

Abstract:

Complex networks have been studied extensively owing to their relevance to many real-world systems such as the world-wide web, the internet, and biological and social systems. During the past two decades, studies of such networks in different fields have produced many significant results concerning their structures, topological properties, and dynamics. Three well-known properties of complex networks are scale-free degree distribution, the small-world effect and self-similarity. The search for additional meaningful properties and the relationships among these properties is an active area of current research. This thesis investigates a newer aspect of complex networks, namely their multifractality, which is an extension of the concept of self-similarity. The first part of the thesis aims to confirm that the study of properties of complex networks can be expanded to a wider field including more complex weighted networks. The real networks that have been shown to possess the self-similarity property in the existing literature are all unweighted networks. We use protein-protein interaction (PPI) networks as a key example to show that their weighted networks inherit self-similarity from the original unweighted networks. Firstly, we confirm that the random sequential box-covering algorithm is an effective tool to compute the fractal dimension of complex networks. This is demonstrated on the Homo sapiens and E. coli PPI networks as well as their skeletons. Our results verify that the fractal dimension of the skeleton is smaller than that of the original network because the shortest distance between nodes is larger in the skeleton; hence, for a fixed box size, more boxes are needed to cover the skeleton. We then adopt the iterative scoring method to generate weighted PPI networks of five species, namely Homo sapiens, E. coli, yeast, C. elegans and Arabidopsis thaliana.
Using the random sequential box-covering algorithm, we calculate the fractal dimensions of both the original unweighted PPI networks and the generated weighted networks. The results show that self-similarity is still present in the generated weighted PPI networks. This implication will be useful for our treatment of the networks in the third part of the thesis. The second part of the thesis aims to explore the multifractal behaviour of different complex networks. Fractals such as the Cantor set, the Koch curve and the Sierpinski gasket are homogeneous, since these fractals consist of a geometrical figure which repeats on an ever-reduced scale. Fractal analysis is a useful method for their study. However, real-world fractals are not homogeneous; there is rarely an identical motif repeated on all scales. Their singularity may vary on different subsets, implying that these objects are multifractal. Multifractal analysis is a useful way to systematically characterize the spatial heterogeneity of both theoretical and experimental fractal patterns. However, the tools for multifractal analysis of objects in Euclidean space are not suitable for complex networks. In this thesis, we propose a new box-covering algorithm for multifractal analysis of complex networks. This algorithm is demonstrated in the computation of the generalized fractal dimensions of some theoretical networks, namely scale-free networks, small-world networks and random networks, and of a class of real networks, namely the PPI networks of different species. Our main finding is the existence of multifractality in scale-free networks and PPI networks, while multifractal behaviour is not confirmed for small-world networks and random networks. As another application, we generate gene interaction networks for patients and healthy people using the correlation coefficients between microarrays of different genes. Our results confirm the existence of multifractality in gene interaction networks.
This multifractal analysis thus provides a potentially useful tool for gene clustering and identification. The third part of the thesis aims to investigate the topological properties of networks constructed from time series. Characterizing complicated dynamics from time series is a fundamental problem of continuing interest in a wide variety of fields. Recent work indicates that complex network theory can be a powerful tool for analysing time series. Many existing methods for transforming time series into complex networks share a common feature: they define the connectivity of a complex network by the mutual proximity of different parts (e.g., individual states, state vectors, or cycles) of a single trajectory. In this thesis, we propose a new method to construct networks from time series: we define nodes as vectors of a certain length in the time series, and weight the edge between any two nodes by the Euclidean distance between the corresponding vectors. We apply this method to build networks for fractional Brownian motions, whose long-range dependence is characterised by their Hurst exponent. We verify the validity of this method by showing that time series with stronger correlation, hence larger Hurst exponent, tend to have smaller fractal dimension, hence smoother sample paths. We then construct networks via the technique of the horizontal visibility graph (HVG), which has been widely used recently. We confirm a known linear relationship between the Hurst exponent of fractional Brownian motion and the fractal dimension of the corresponding HVG network. In the first application, we apply our newly developed box-covering algorithm to calculate the generalized fractal dimensions of the HVG networks of fractional Brownian motions as well as those of binomial cascades and five bacterial genomes. The results confirm the monoscaling of fractional Brownian motion and the multifractality of the rest.
As an additional application, we discuss the resilience of networks constructed from time series via two different approaches: the visibility graph (VG) and the horizontal visibility graph. Our finding is that the degree distribution of VG networks of fractional Brownian motions is scale-free (i.e., it follows a power law), meaning that one needs to destroy a large percentage of nodes before the network collapses into isolated parts; for HVG networks of fractional Brownian motions, by contrast, the degree distribution has exponential tails, implying that HVG networks would not survive the same kind of attack.
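The horizontal visibility graph construction used in the third part has a compact definition: two time points are connected if every value strictly between them is lower than both. A small sketch on a toy series (not the thesis data):

```python
def horizontal_visibility_graph(x):
    """Return edges (i, j), i < j, where x[i] and x[j] both exceed
    every value lying between them in the series."""
    edges = []
    n = len(x)
    for i in range(n - 1):
        edges.append((i, i + 1))  # consecutive points always see each other
        top = x[i + 1]            # running maximum of the values between i and j
        for j in range(i + 2, n):
            if x[i] > top and x[j] > top:
                edges.append((i, j))
            top = max(top, x[j])
    return edges

g = horizontal_visibility_graph([3.0, 1.0, 2.0, 4.0])
```

The fractal-dimension and degree-distribution analyses described above would then be run on the resulting graph; for a fractional Brownian motion sample path, `x` would be the simulated series.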

Relevance:

30.00%

Publisher:

Abstract:

The main constituents of the red mud produced in the city of Alumínio (São Paulo, Brazil) are iron, aluminium and silicon oxides. The average particle diameter of this red mud is between 0.002 and 0.05 mm. A decrease in the percentage of smaller particles is observed at temperatures greater than 400°C. This observation corresponds with the thermal analysis and X-ray diffraction (XRD) data, which illustrate the phase transition of goethite to hematite. A 10% mass loss is observed in the thermal analysis patterns due to the hydroxide-oxide phase transitions of iron (the primary phase transition) and, to a lesser extent, aluminium. The disappearance and appearance of the different phases of iron and aluminium confirm the decomposition reactions proposed from the thermal analysis data. This Brazilian red mud has been classified as mesoporous at all temperatures except between 400 and 500°C, where the classification changes to micro/mesoporous.

Relevance:

30.00%

Publisher:

Abstract:

In this paper, spatially offset Raman spectroscopy (SORS) is demonstrated for non-invasively investigating the composition of drug mixtures inside an opaque plastic container. The mixtures consisted of three components including a target drug (acetaminophen or phenylephrine hydrochloride) and two diluents (glucose and caffeine). The target drug concentrations ranged from 5% to 100%. After conducting SORS analysis to ascertain the Raman spectra of the concealed mixtures, principal component analysis (PCA) was performed on the SORS spectra to reveal trends within the data. Partial least squares (PLS) regression was used to construct models that predicted the concentration of each target drug, in the presence of the other two diluents. The PLS models were able to predict the concentration of acetaminophen in the validation samples with a root-mean-square error of prediction (RMSEP) of 3.8% and the concentration of phenylephrine hydrochloride with an RMSEP of 4.6%. This work demonstrates the potential of SORS, used in conjunction with multivariate statistical techniques, to perform non-invasive, quantitative analysis on mixtures inside opaque containers. This has applications for pharmaceutical analysis, such as monitoring the degradation of pharmaceutical products on the shelf, in forensic investigations of counterfeit drugs, and for the analysis of illicit drug mixtures which may contain multiple components.
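A one-component PLS1 model illustrates the regression step in miniature. This is a textbook NIPALS-style sketch with invented toy "spectra", not the calibration used in the paper, which would use multiple latent components and real SORS spectra:

```python
def pls1_fit(X, y):
    """One-component PLS1: returns (x_means, y_mean, w, b), where w is the
    unit weight vector X'y/||X'y|| and b regresses y on the score t = Xw."""
    n, p = len(X), len(X[0])
    xm = [sum(row[j] for row in X) / n for j in range(p)]
    ym = sum(y) / n
    Xc = [[row[j] - xm[j] for j in range(p)] for row in X]   # mean-centre X
    yc = [v - ym for v in y]                                  # mean-centre y
    w = [sum(Xc[i][j] * yc[i] for i in range(n)) for j in range(p)]
    norm = sum(v * v for v in w) ** 0.5
    w = [v / norm for v in w]
    t = [sum(Xc[i][j] * w[j] for j in range(p)) for i in range(n)]
    b = sum(t[i] * yc[i] for i in range(n)) / sum(v * v for v in t)
    return xm, ym, w, b

def pls1_predict(model, x):
    xm, ym, w, b = model
    return ym + b * sum((x[j] - xm[j]) * w[j] for j in range(len(x)))

# toy two-channel "spectra" whose response is proportional to concentration
X = [[1.0, 2.0], [2.0, 4.0], [3.0, 6.0], [4.0, 8.0]]
y = [3.0, 6.0, 9.0, 12.0]
model = pls1_fit(X, y)
pred = pls1_predict(model, [2.5, 5.0])
```

On this rank-one toy data a single component fits exactly; real SORS spectra would need several components and validation against held-out samples, exactly as the RMSEP figures in the abstract report.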

Relevance:

30.00%

Publisher:

Abstract:

Existing algebraic analyses of the ZUC cipher indicate that the cipher should be secure against algebraic attacks. In this paper, we present an alternative algebraic analysis method for the ZUC stream cipher, where a combiner is used to represent the nonlinear function and to derive equations representing the cipher. Using this approach, the initial state of ZUC can be recovered from 2^97 observed words of keystream, with a complexity of 2^282 operations. This method is more successful when applied to a modified version of ZUC in which the number of output words per clock is increased. If the cipher outputs 120 bits of keystream per clock, the attack can succeed with 219 observed keystream bits and 2^47 operations. Therefore, the security of ZUC against algebraic attacks could be significantly reduced if its throughput were increased for efficiency.

Relevance:

30.00%

Publisher:

Abstract:

Both the SSS and SOBER-t32 stream cipher designs use a single word-based shift register and a nonlinear filter function to produce keystream. In this paper we show that the algebraic attack method previously applied to SOBER-t32 is prevented from succeeding on SSS by the use of the key-dependent substitution box (SBox) in the nonlinear filter of SSS. Additional assumptions and modifications to the SSS cipher in an attempt to enable algebraic analysis result in other difficulties that also render the algebraic attack infeasible. Based on these results, we conclude that a well-chosen key-dependent substitution box used in the nonlinear filter of a stream cipher provides resistance against such algebraic attacks.

Relevance:

30.00%

Publisher:

Abstract:

A novel method for genotyping the clustered regularly interspaced short palindromic repeat (CRISPR) locus of Campylobacter jejuni is described. Following real-time PCR, CRISPR products were subjected to high-resolution melt (HRM) analysis, a new technology that allows precise melt-profile determination of amplicons. This investigation shows that the CRISPR HRM assay provides a powerful addition to existing C. jejuni genotyping methods and emphasizes the potential of HRM for genotyping short sequence repeats in other species.

Relevance:

30.00%

Publisher:

Abstract:

Opinion mining is becoming increasingly important, especially for analysing and forecasting customer behaviour for business purposes. The right decision in producing new products or services, based on data about customers' characteristics, translates into profit for an organization. This paper proposes a new architecture for opinion mining, which uses a multidimensional model to integrate customers' characteristics and their comments about products (or services). The key step towards this objective is to transfer comments (opinions) into a fact table that includes several dimensions, such as customers, products, time and locations. This research presents a comprehensive way to calculate customers' orientation for all possible product attributes. A case study is also presented to show the advantages of using OLAP and data cubes to analyse customers' opinions.
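The fact-table idea can be sketched with plain dictionaries: rows keyed by customer, product, attribute and time dimensions carry an opinion-orientation measure, and an OLAP-style roll-up aggregates it along chosen dimensions. All names and values below are hypothetical:

```python
from collections import defaultdict

# Hypothetical fact rows: (customer_id, product, attribute, month, orientation)
# orientation in [-1, 1]: negative to positive opinion polarity.
facts = [
    ("c1", "phone",  "battery", "2024-01",  0.8),
    ("c2", "phone",  "battery", "2024-01", -0.2),
    ("c1", "phone",  "screen",  "2024-02",  0.5),
    ("c3", "tablet", "battery", "2024-01",  0.9),
]

def rollup(facts, dims):
    """Mean orientation grouped by the chosen dimension indices (a cube slice)."""
    sums, counts = defaultdict(float), defaultdict(int)
    for row in facts:
        key = tuple(row[d] for d in dims)
        sums[key] += row[4]
        counts[key] += 1
    return {k: sums[k] / counts[k] for k in sums}

# average orientation per (product, attribute)
by_attr = rollup(facts, dims=(1, 2))
```

Changing `dims` slices the same fact table along customers, time or locations, which is the multidimensional flexibility the architecture relies on.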

Relevance:

30.00%

Publisher:

Abstract:

Concerns regarding groundwater contamination with nitrate and the long-term sustainability of groundwater resources have prompted the development of a multi-layered three-dimensional (3D) geological model to characterise the aquifer geometry of the Wairau Plain, Marlborough District, New Zealand. The 3D geological model, which consists of eight litho-stratigraphic units, has subsequently been used to synthesise hydrogeological and hydrogeochemical data for different aquifers in an approach that aims to demonstrate how integration of water chemistry data within the physical framework of a 3D geological model can help to better understand and conceptualise groundwater systems in complex geological settings. Multivariate statistical techniques (e.g. Principal Component Analysis and Hierarchical Cluster Analysis) were applied to the groundwater chemistry data to identify hydrochemical facies which are characteristic of distinct evolutionary pathways and a common hydrologic history of groundwaters. Principal Component Analysis of the hydrochemical data demonstrated that natural water-rock interactions, redox potential and agricultural impact are the key controls on groundwater quality in the Wairau Plain. Hierarchical Cluster Analysis revealed distinct hydrochemical water-quality groups in the Wairau Plain groundwater system. Visualisation of the results of the multivariate statistical analyses and of the distribution of groundwater nitrate concentrations in the context of aquifer lithology highlighted the link between groundwater chemistry and the lithology of host aquifers. The methodology followed in this study can be applied in a variety of hydrogeological settings to synthesise geological, hydrogeological and hydrochemical data and to present them in a format readily understood by a wide range of stakeholders. This enables more efficient communication of the results of scientific studies to the wider community.
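As an illustration of the hierarchical clustering step, here is a naive single-linkage agglomerative routine on toy two-dimensional "samples". The concentrations are invented and bear no relation to the Wairau Plain data:

```python
def single_linkage(points, n_clusters):
    """Naive agglomerative clustering: repeatedly merge the two clusters
    whose closest members (by Euclidean distance) are nearest."""
    clusters = [[p] for p in points]

    def dist(a, b):
        return min(sum((x - y) ** 2 for x, y in zip(p, q)) ** 0.5
                   for p in a for q in b)

    while len(clusters) > n_clusters:
        i, j = min(((i, j) for i in range(len(clusters))
                    for j in range(i + 1, len(clusters))),
                   key=lambda ij: dist(clusters[ij[0]], clusters[ij[1]]))
        clusters[i] += clusters.pop(j)  # merge the closest pair
    return clusters

# toy water samples described by hypothetical (Na+, NO3-) concentrations
samples = [(1.0, 0.1), (1.1, 0.2), (5.0, 3.0), (5.2, 3.1)]
groups = single_linkage(samples, n_clusters=2)
```

Real hydrochemical clustering would standardise each analyte first and typically use Ward linkage on the full suite of major ions, but the grouping logic is the same.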

Relevance:

30.00%

Publisher:

Abstract:

Each financial year, concessions, benefits and incentives are delivered to taxpayers via the tax system. These concessions, benefits and incentives, referred to as tax expenditures, differ from direct expenditure because of their recurring fiscal impact without regular scrutiny through the federal budget process. There are approximately 270 different tax expenditures existing within the current tax regime, with total measured tax expenditures in the 2005-06 financial year estimated to be around $42.1 billion, increasing to $52.7 billion by 2009-10. Each year, new tax expenditures are introduced, while existing tax expenditures are modified and deleted. In recognition of some of the problems associated with tax expenditure, a Tax Expenditure Statement, as required by the Charter of Budget Honesty Act 1998, is produced annually by the Australian Federal Treasury. The Statement details the various expenditures and measures in the form of concessions, benefits and incentives provided to taxpayers by the Australian Government and calculates the tax expenditure in terms of revenue forgone. A similar approach to reporting tax expenditure, with such a report being a legal requirement, is followed by most OECD countries. The current Tax Expenditure Statement lists 270 tax expenditures and, where it is able to, reports on the estimated pecuniary value of those expenditures. Apart from the annual Tax Expenditure Statement, there is very little other scrutiny of Australia's federal tax expenditure program. While there have been various academic analyses of tax expenditure in Australia, when compared to the North American literature, it is suggested that the Australian literature is still in its infancy.
In fact, one academic author who has contributed to tax expenditure analysis recently noted that there is 'remarkably little secondary literature which deals at any length with tax expenditures in the Australian context.' Given this perceived gap in the secondary literature, this paper examines the fundamental concept of tax expenditure and considers the role it plays in the current tax regime as a whole, along with the effects of the introduction of new tax expenditures. In doing so, tax expenditure is contrasted with direct expenditure. The analysis of tax expenditure versus direct expenditure is already a sophisticated and comprehensive body of work, stemming from the US over the last three decades. As such, the title of this paper is rather misleading. However, given the lack of analysis in Australia, it is appropriate that this paper undertakes a consideration of tax expenditure versus direct expenditure in an Australian context. Given this proposition, rather than purport to undertake a comprehensive analysis of tax expenditure, which has already been done, this paper discusses the substantive considerations of any such analysis to enable further investigation into the tax expenditure regime, both as a whole and into individual tax expenditure initiatives. While none of the propositions in this paper is new in a 'tax expenditure analysis' sense, this debate is a relatively new contribution to the Australian literature on tax policy. Before the issues relating to tax expenditure can be determined, it is necessary to consider what is meant by 'tax expenditure'. As such, part two of this paper defines 'tax expenditure'. Part three determines the framework in which tax expenditure can be analysed. It is suggested that tax expenditure must be evaluated within the framework of the design criteria of an income tax system, with the key features of equity, efficiency and simplicity.
Tax expenditure analysis can then be applied to deviations from the ideal tax base. Once it is established what is meant by tax expenditure and the framework for evaluation is determined, it is possible to establish the substantive issues to be evaluated. This paper suggests that there are four broad areas worthy of investigation: economic efficiency; administrative efficiency; whether tax expenditure initiatives achieve their policy intent; and the impact on stakeholders. Given these areas of investigation, part four of this paper considers the issues relating to the economic efficiency of the tax expenditure regime, in particular the effect on resource allocation, incentives for taxpayer behaviour and distortions created by tax expenditures. Part five examines the notion of administrative efficiency in light of the fact that most tax expenditures could simply be delivered as direct expenditures. Part six explores the notion of policy intent and considers the two questions that need to be asked: whether any tax expenditure initiative reaches its target group, and whether the financial incentives are appropriate. Part seven examines the impact on stakeholders. Finally, part eight considers the future of tax expenditure analysis in Australia.

Relevance:

30.00%

Publisher:

Abstract:

The importance of actively managing and analyzing business processes is acknowledged more than ever in organizations nowadays. Business processes form an essential part of an organization, and their application areas are manifold. Most organizations keep records of various activities that have been carried out for auditing purposes, but these records are rarely used for analysis. This paper describes the design and implementation of a process analysis tool that replays, analyzes and visualizes a variety of performance metrics using a process definition and its execution logs. Performing performance analysis on existing and planned process models offers organizations a great way to detect bottlenecks within their processes and allows them to make more effective process improvement decisions. Our technique is applied to processes modeled in the YAWL language. Execution logs of process instances are compared against the corresponding YAWL process model and replayed in a robust manner, taking into account any noise in the logs. Finally, performance characteristics, obtained from replaying the log in the model, are projected onto the model.
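At its simplest, replaying a log for performance analysis amounts to deriving per-task statistics from timestamped events and attaching them to the corresponding model elements. A toy sketch with invented log rows (real YAWL logs are considerably richer, and robust replay must also handle noise):

```python
from collections import defaultdict

# Hypothetical execution log: (case_id, task, start, end) with numeric timestamps
log = [
    ("case1", "check",   0, 4), ("case1", "approve", 6,  9),
    ("case2", "check",   1, 3), ("case2", "approve", 5, 10),
]

def performance_by_task(log):
    """Average service time per task, the kind of metric one might
    project onto the matching nodes of a process model."""
    durations = defaultdict(list)
    for _, task, start, end in log:
        durations[task].append(end - start)
    return {t: sum(d) / len(d) for t, d in durations.items()}

metrics = performance_by_task(log)
```

A fuller replay would also walk the model's control flow to compute waiting times between tasks and flag log events that the model cannot explain.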

Relevance:

30.00%

Publisher:

Abstract:

This paper seeks to explain the lagging productivity in Singapore's manufacturing sector noted in the Economic Strategies Committee Report 2010. Two methods are employed: the Malmquist productivity index, to measure total factor productivity change, and Simar and Wilson's (J Econom 136:31-64, 2007) bootstrapped truncated regression approach. In the first stage, nonparametric data envelopment analysis is used to measure technical efficiency. To quantify the economic drivers underlying inefficiencies, the second stage employs a bootstrapped truncated regression in which bias-corrected efficiency estimates are regressed against explanatory variables. The findings reveal that growth in total factor productivity was attributable to efficiency change, with no technical progress. Most industries were technically inefficient throughout the period except for 'Pharmaceutical Products'. Efficiency gains were attributed to worker quality and flexible work arrangements, while heavy reliance on foreign workers lowered efficiency.
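With a single input and a single output, the Malmquist decomposition reduces to simple ratios, which makes the catch-up and frontier-shift logic easy to see. This is a deliberate simplification of the multi-input DEA used in the paper, and all figures below are invented:

```python
def malmquist(y1, x1, y2, x2, data1, data2):
    """TFP change for one unit between two periods, split into efficiency
    change (catch-up to the frontier) and technical change (frontier shift).
    With one input and one output, the frontier is the best output/input ratio."""
    f1 = max(y / x for y, x in data1)   # period-1 frontier
    f2 = max(y / x for y, x in data2)   # period-2 frontier
    e1 = (y1 / x1) / f1                 # efficiency relative to each frontier
    e2 = (y2 / x2) / f2
    eff_change = e2 / e1
    tech_change = f2 / f1
    return eff_change * tech_change, eff_change, tech_change

data1 = [(10, 5), (8, 4), (6, 6)]   # hypothetical (output, input) per industry
data2 = [(12, 5), (9, 4), (8, 6)]
tfp, eff, tech = malmquist(6, 6, 8, 6, data1, data2)
```

Here the third unit's productivity rises, but the frontier also shifts out, so its TFP growth splits into a catch-up component and a technology component; the real study computes both from multi-input DEA distance functions.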

Relevance:

30.00%

Publisher:

Abstract:

Kaolinite intercalation and its application in polymer-based functional composites have attracted great interest in both industry and academia, since such composites frequently exhibit remarkable improvements in material properties compared with the virgin polymer or conventional micro- and macro-composites. Also of significant interest is the thermal behavior and decomposition of kaolinite intercalation complexes, because heat treatment of intercalated kaolinite is necessary for its further application, especially in the plastic and rubber industries. Although intercalation of kaolinite is a long-standing and ongoing research topic, knowledge remains limited on kaolinite intercalation with different reagents, on the mechanism of intercalation complex formation, and on thermal behavior and phase transitions. This review attempts to summarize the most recent achievements in the study of the thermal behavior of kaolinite intercalation complexes obtained with the most common reagents, including potassium acetate, formamide, dimethyl sulfoxide, hydrazine and urea. Further work on kaolinite intercalation complexes is also proposed at the end of the paper.