112 results for Minimal-complexity classifier
in University of Queensland eSpace - Australia
Abstract:
This paper proposes a theoretical explanation of the variations of the sediment delivery ratio (SDR) versus catchment area relationship and the complex patterns in the behavior of sediment transfer processes at catchment scale. Taking into account the effects of erosion source types, deposition, and hydrological controls, we propose a simple conceptual model consisting of two linear stores arranged in series: a hillslope store that addresses transport to the nearest streams and a channel store that addresses sediment routing in the channel network. The model identifies four dimensionless scaling factors, which enable us to analyze a variety of effects on SDR estimation, including (1) interacting processes of erosion sources and deposition, (2) different temporal averaging windows, and (3) catchment runoff response. We show that the interactions between storm duration and hillslope/channel travel times are the major controls on peak-value-based sediment delivery and its spatial variations. The interplay between depositional timescales and the travel/residence times determines the spatial variations of total-volume-based SDR. In practical terms, this parsimonious, minimal-complexity model could provide a sound physical basis for diagnosing catchment-to-catchment variability of sediment transport if the proposed scaling factors can be quantified using climatic and catchment properties.
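The two-linear-stores-in-series idea can be sketched numerically. The sketch below is illustrative only: the explicit Euler routing scheme, the parameter values (storm duration, residence times), and the peak-based delivery ratio are assumptions chosen to show how the cascade attenuates a rectangular erosion pulse, not values taken from the paper, and deposition losses are not modelled.

```python
import numpy as np

def linear_store(inflow, k, dt=1.0):
    """Route a flux through a linear store with residence time k (dS/dt = I - S/k)."""
    S, out = 0.0, []
    for I in inflow:
        S += (I - S / k) * dt
        out.append(S / k)
    return np.array(out)

# Rectangular erosion pulse of duration D (an idealized storm)
D, n = 10, 200
supply = np.zeros(n)
supply[:D] = 1.0

hillslope = linear_store(supply, k=5.0)      # transport to the nearest streams
channel = linear_store(hillslope, k=20.0)    # routing in the channel network

# Peak-value-based delivery ratio: attenuation of the peak through the cascade
sdr_peak = channel.max() / supply.max()
```

Because each store smooths and delays the pulse, the peak-based ratio falls below one whenever the storm duration is short relative to the travel times, which is the interaction the abstract identifies as the main control on peak-value-based delivery.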
Abstract:
We give a simple proof of a formula for the minimal time required to simulate a two-qubit unitary operation using a fixed two-qubit Hamiltonian together with fast local unitaries. We also note that a related lower bound holds for arbitrary n-qubit gates.
Abstract:
Simplicity in design and minimal floor space requirements render the hydrocyclone the preferred classifier in mineral processing plants. Empirical models have been developed for design and process optimisation, but due to the complexity of the flow behaviour in the hydrocyclone these do not provide information on the internal separation mechanisms. To study the interaction of design variables, the flow behaviour needs to be considered, especially when modelling the new three-product cyclone. Computational fluid dynamics (CFD) was used to model the three-product cyclone, in particular the influence of the dual vortex finder arrangement on flow behaviour. From experimental work performed on the UG2 platinum ore, significant differences in the classification performance of the three-product cyclone were noticed with variations in the inner vortex finder length. Because of this, simulations were performed for a range of inner vortex finder lengths. Simulations were also conducted on a conventional hydrocyclone of the same size to enable a direct comparison of the flow behaviour between the two cyclone designs. Significantly higher velocities were observed for the three-product cyclone with an inner vortex finder extended deep into the conical section of the cyclone. CFD studies revealed that in the three-product cyclone, a cylindrical air-core is observed similar to conventional hydrocyclones. A constant-diameter air-core was observed throughout the inner vortex finder length, while no air-core was present in the annulus. (c) 2006 Elsevier Ltd. All rights reserved.
Abstract:
Since their discovery 150 years ago, Neanderthals have been considered incapable of behavioural change and innovation. Traditional synchronic approaches to the study of Neanderthal behaviour have perpetuated this view and shaped our understanding of their lifeways and eventual extinction. In this thesis I implement an innovative diachronic approach to the analysis of Neanderthal faunal extraction, technology and symbolic behaviour as contained in the archaeological record of the critical period between 80,000 and 30,000 years BP. The thesis demonstrates patterns of change in Neanderthal behaviour which are at odds with traditional perspectives and which are consistent with an interpretation of increasing behavioural complexity over time, an idea that has been suggested but never thoroughly explored in Neanderthal archaeology. Demonstrating an increase in behavioural complexity in Neanderthals provides much needed new data with which to fuel the debate over the behavioural capacities of Neanderthals and the first appearance of Modern Human Behaviour in Europe. It supports the notion that Neanderthal populations were active agents of behavioural innovation prior to the arrival of Anatomically Modern Humans in Europe and, ultimately, that they produced an early Upper Palaeolithic cultural assemblage (the Châtelperronian) independent of modern humans. Overall, this thesis provides an initial step towards the development of a quantitative approach to measuring behavioural complexity which provides fresh insights into the cognitive and behavioural capabilities of Neanderthals.
Abstract:
Diachronic approaches provide the potential for a more sophisticated framework within which to examine change in Neanderthal behavioural complexity using archaeological proxies such as symbolic artefacts, faunal assemblages and technology. Analysis of the temporal appearance and distribution of such artefacts and assemblages provides the basis for identifying changes in Neanderthal behavioural complexity in terms of symbolism, faunal extraction and technology respectively. Although changes in technology and faunal extraction were examined in the wider study, only the results of the symbolic study are presented below to illustrate the potential of the approach.
Abstract:
This paper presents an analysis of disfluencies in two oral tellings of a familiar children's story by a young boy with autism. Thurber & Tager-Flusberg (1993) postulate a lower degree of cognitive and communicative investment to explain a lower frequency of non-grammatical pauses observed in elicited narratives of children with autism in comparison to typically developing and intellectually disabled controls. We also found a very low frequency of non-grammatical pauses in our data, but indications of high engagement and cognitive and communicative investment. We point to a wider range of disfluencies as indicators of cognitive load, and show that the kind and location of disfluencies produced may reveal which aspects of the narrative task are creating the greatest cognitive demand: here, mental state ascription, perspectivization, and adherence to story schema. This paper thus generates analytical options and hypotheses that can be explored further in a larger population of children with autism and typically developing controls.
Abstract:
This paper presents a new relative measure of signal complexity, referred to here as relative structural complexity, which is based on the matching pursuit (MP) decomposition. By relative, we refer to the fact that this new measure is highly dependent on the decomposition dictionary used by MP. The structural part of the definition points to the fact that this new measure is related to the structure, or composition, of the signal under analysis. After a formal definition, the proposed relative structural complexity measure is used in the analysis of newborn EEG. To do this, firstly, a time-frequency (TF) decomposition dictionary is specifically designed to compactly represent the newborn EEG seizure state using MP. We then show, through the analysis of synthetic and real newborn EEG data, that the relative structural complexity measure can indicate changes in EEG structure as it transitions between the two EEG states; namely seizure and background (non-seizure).
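The dictionary dependence that makes the measure "relative" can be illustrated with a toy matching pursuit. The abstract does not give the formula for relative structural complexity, so the specific count below (the number of dictionary atoms MP needs to capture 95% of the signal's energy) is an assumption used only to show that a signal composed of dictionary atoms scores as structurally simple while one outside the dictionary's span does not; the sinusoid dictionary and all parameter values are likewise illustrative.

```python
import numpy as np

def matching_pursuit(signal, dictionary, energy_frac=0.95, max_iter=100):
    """Greedy MP: count atoms needed to capture energy_frac of the signal energy.

    `dictionary` holds unit-norm atoms as rows; returns max_iter if the
    target energy fraction is never reached (a structurally complex signal
    relative to this dictionary).
    """
    residual = signal.astype(float).copy()
    target = (1.0 - energy_frac) * np.dot(signal, signal)
    for k in range(max_iter):
        if np.dot(residual, residual) <= target:
            return k
        corr = dictionary @ residual              # correlation with each atom
        i = np.argmax(np.abs(corr))               # best-matching atom
        residual -= corr[i] * dictionary[i]       # subtract its projection
    return max_iter

# Toy dictionary: unit-norm sinusoids (mutually orthogonal over a full period)
n = 64
t = np.arange(n)
atoms = np.array([np.cos(2 * np.pi * f * t / n) for f in range(1, 20)])
atoms /= np.linalg.norm(atoms, axis=1, keepdims=True)

simple = 3.0 * atoms[2]                           # one atom: low complexity
noisy = np.random.default_rng(0).standard_normal(n)  # outside the span: high
```

Relative to this dictionary, `simple` is fully explained by a single atom while `noisy` never reaches the energy target, mirroring the paper's use of a seizure-tuned dictionary to separate the seizure and background EEG states.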
Abstract:
There are many factors that affect the L2 learner’s performance at the levels of phonology, morphology and syntax. Consequently, when L2 learners attempt to communicate in the target language, their language production will show systematic variability across the above-mentioned linguistic domains. This variation can be attributed to factors such as interlocutors, topic familiarity, prior knowledge, task condition, planning time and task type. This paper reports the results of ongoing research investigating variability attributed to task type. It is hypothesized that the particular type of task learners are required to perform will result in variation in their performance. Results of the statistical analyses of this study, which investigated variation in the performance of twenty L2 learners at the English department of Tabriz University, provided evidence in support of the hypothesis that the performance of L2 learners shows systematic variability attributable to task.
Abstract:
Data mining is the process of identifying valid, implicit, previously unknown, potentially useful and understandable information from large databases. It is an important step in the process of knowledge discovery in databases (Olaru & Wehenkel, 1999). In a data mining process, input data can be structured, semi-structured, or unstructured, and can consist of text, categorical or numerical values. One of the important characteristics of data mining is its ability to deal with data that are large in volume, distributed, time-variant, noisy, and high-dimensional. A large number of data mining algorithms have been developed for different applications. For example, association rule mining can be useful for market basket problems, clustering algorithms can be used to discover trends in unsupervised learning problems, classification algorithms can be applied in decision-making problems, and sequential and time series mining algorithms can be used in predicting events, fault detection, and other supervised learning problems (Vapnik, 1999). Classification is among the most important tasks in data mining, particularly for data mining applications in engineering fields. Together with regression, classification is mainly used for predictive modelling. So far, a number of classification algorithms have been put into practice. According to Sebastiani (2002), the main classification algorithms can be categorized as: decision tree and rule-based approaches such as C4.5 (Quinlan, 1996); probabilistic methods such as the Bayesian classifier (Lewis, 1998); on-line methods such as Winnow (Littlestone, 1988) and CVFDT (Hulten, 2001); neural network methods (Rumelhart, Hinton & Williams, 1986); example-based methods such as k-nearest neighbours (Duda & Hart, 1973); and SVM (Cortes & Vapnik, 1995). Other important techniques for classification tasks include Associative Classification (Liu et al., 1998) and Ensemble Classification (Tumer, 1996).
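As a concrete instance of the example-based methods mentioned above, a k-nearest-neighbours classifier (Duda & Hart, 1973) can be sketched in a few lines. The toy two-cluster data, the function name, and the choice of Euclidean distance with k=3 are illustrative assumptions, not drawn from any of the cited works.

```python
import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, x, k=3):
    """Classify x by majority vote among its k nearest training points (Euclidean)."""
    distances = np.linalg.norm(X_train - x, axis=1)
    nearest = np.argsort(distances)[:k]
    return Counter(y_train[nearest].tolist()).most_common(1)[0][0]

# Two well-separated clusters as toy training data
X = np.array([[0, 0], [0, 1], [1, 0], [5, 5], [5, 6], [6, 5]], dtype=float)
y = np.array([0, 0, 0, 1, 1, 1])

pred_a = knn_predict(X, y, np.array([0.2, 0.3]))  # near the class-0 cluster
pred_b = knn_predict(X, y, np.array([5.4, 5.1]))  # near the class-1 cluster
```

Being example-based, the method builds no explicit model: the training set itself is the classifier, which keeps training free but makes prediction cost grow with the data volume that the paragraph above identifies as characteristic of data mining.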
Abstract:
The present study was designed to examine the main and interactive effects of task demands, work control, and task information on levels of adjustment. Task demands, work control, and task information were manipulated in an experimental setting where participants completed a letter-sorting activity (N = 128). Indicators of adjustment included measures of positive mood, participants' perceptions of task performance, and task satisfaction. Results of the present study provided some support for the main effects of objective task demands, work control, and task information on levels of adjustment. At the subjective level of analysis, there was some evidence to suggest that work control and task information interacted in their effects on levels of adjustment. There was minimal support for the proposal that work control and task information would buffer the negative effects of task demands on adjustment. There was, however, some evidence to suggest that the stress-buffering role of subjective work control was more marked at high, rather than low, levels of subjective task information.
Abstract:
Human N-acetyltransferase Type I (NAT1) catalyses the acetylation of many aromatic amine and hydrazine compounds, and it has been implicated in the catabolism of folic acid. The enzyme is widely expressed in the body, although there are considerable differences in the level of activity between tissues. A search of the mRNA databases revealed the presence of several NAT1 transcripts in human tissue that appear to be derived from different promoters. Because little is known about NAT1 gene regulation, the present study was undertaken to characterize one of the putative promoter sequences of the NAT1 gene, located just upstream of the coding region. We show with reverse-transcriptase PCR that mRNA transcribed from this promoter (Promoter 1) is present in a variety of human cell lines, but not in quiescent peripheral blood mononuclear cells. Using deletion mutant constructs, we identified a 20 bp sequence located 245 bases upstream of the translation start site which was sufficient for basal NAT1 expression. It comprised an AP-1 (activator protein 1)-binding site, flanked on either side by a TCATT motif. Mutational analysis showed that the AP-1 site and the 3' TCATT sequence were necessary for gene expression, whereas the 5' TCATT appeared to attenuate promoter activity. Electrophoretic mobility shift assays revealed two specific bands made up of complexes of c-Fos/Fra, c-Jun, YY-1 (Yin and Yang 1) and possibly Oct-1. PMA treatment enhanced expression from the NAT1 promoter via the AP-1-binding site. Furthermore, in peripheral blood mononuclear cells, PMA increased endogenous NAT1 activity and induced mRNA expression from Promoter 1, suggesting that it is functional in vivo.
Abstract:
Quantifying mass and energy exchanges within tropical forests is essential for understanding their role in the global carbon budget and how they will respond to perturbations in climate. This study reviews ecosystem process models designed to predict the growth and productivity of temperate and tropical forest ecosystems. Temperate forest models were included because of the small number of tropical forest models. The review provides a multiscale assessment enabling potential users to select a model suited to the scale and type of information they require in tropical forests. Process models are reviewed in relation to their input and output parameters, minimum spatial and temporal units of operation, maximum spatial extent and time period of application for each organizational level of modelling. Organizational levels included leaf-tree, plot-stand, regional and ecosystem levels, with model complexity decreasing as the time-step and spatial extent of model operation increase. All ecosystem models are simplified versions of reality and are typically aspatial. Remotely sensed data sets and derived products may be used to initialize, drive and validate ecosystem process models. At the simplest level, remotely sensed data are used to delimit the location, extent and changes over time of vegetation communities. At a more advanced level, remotely sensed data products have been used to estimate key structural and biophysical properties associated with ecosystem processes in tropical and temperate forests. Combining ecological models and image data enables the development of carbon accounting systems that will contribute to understanding greenhouse gas budgets at biome and global scales.