84 results for limit sets

in Deakin Research Online - Australia


Relevance:

20.00%

Publisher:

Abstract:

Information is given a privileged place in the psychiatric clinic, as illustrated by the prevalence and volume of data to be collected and forms to be completed by psychiatric nurses. Information, though, is different from knowledge. The present paper argues that information is part of a managerial discourse that implies commodification, whereas knowledge is part of a clinical discourse that allows room for the suffering of the patient. Information belongs to the discourse of managerialism, one that positions the patient as customer/consumer and in doing so renders them unsuffering. The patient's suffering is silenced by their construction as a consumer. The discourse of managerialism seeks a complete data set of information. By way of contrast, another discourse, that of psychoanalysis, offers the institution the idea that there are always holes, gaps, and uncertainty. The idea of uncertainty, gaps, things remaining unknown, and a limit sits uncomfortably with the dominant discourse of managerialism, which demands no limits, complete data sets, and many satisfied customers. This market model of managerialism denies the potential of the therapeutic relationship: that something curative might be produced via the transference. In addition, the managerialist discourse potentially positions the patient as both illegitimate and unsuffering.

Relevance:

20.00%

Publisher:

Abstract:

This paper reviews the appropriateness for application to large data sets of standard machine learning algorithms, which were mainly developed in the context of small data sets. Sampling and parallelisation have proved useful means of reducing computation time when learning from large data sets. However, such methods assume that algorithms designed for what are now considered small data sets are also fundamentally suitable for large data sets. It is plausible that optimal learning from large data sets requires a different type of algorithm from optimal learning from small data sets. This paper investigates one respect in which data set size may affect the requirements of a learning algorithm: the bias plus variance decomposition of classification error. Experiments show that learning from large data sets may be more effective when using an algorithm that places greater emphasis on bias management rather than variance management.
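
As a rough illustration of the bias plus variance decomposition the abstract investigates, the sketch below estimates both terms for 0-1 loss by retraining a learner on bootstrap resamples (a Domingos-style decomposition; the dataset, learner, and sample sizes are assumptions for illustration, not the paper's setup):

```python
# Sketch: estimating a bias/variance decomposition of 0-1 classification
# error by bootstrap resampling. All settings here are illustrative.
import numpy as np
from scipy import stats
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_pool, X_test, y_pool, y_test = train_test_split(X, y, test_size=1000,
                                                  random_state=0)
n_rounds = 50
preds = np.empty((n_rounds, len(y_test)), dtype=int)
rng = np.random.default_rng(0)
for r in range(n_rounds):
    idx = rng.integers(0, len(y_pool), size=len(y_pool))  # bootstrap sample
    model = DecisionTreeClassifier().fit(X_pool[idx], y_pool[idx])
    preds[r] = model.predict(X_test)

main_pred = stats.mode(preds, axis=0).mode.ravel()  # modal prediction
bias = np.mean(main_pred != y_test)        # error of the "main" prediction
variance = np.mean(preds != main_pred)     # disagreement with the main one
print(f"bias ~ {bias:.3f}, variance ~ {variance:.3f}")
```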

Relevance:

20.00%

Publisher:

Abstract:

This paper proposes an optimal strategy for extracting probabilistic rules from databases. Two inductive-learning statistical measures, accuracy and coverage, are introduced together with their rough-set-based definitions. The simplicity of a rule, emphasized in this paper, has previously been ignored in the discovery of probabilistic rules. To avoid the high computational complexity of the rough-set approach, some rough-set terminology, rather than the approach itself, is applied to represent the probabilistic rules. A genetic algorithm is exploited to find the optimal probabilistic rules, those with the highest accuracy and coverage and the shortest length. Some heuristic genetic operators are also utilized to make the global search and the evolution of rules more efficient. Experimental results reveal that the method runs more efficiently and generates probabilistic classification rules of the same integrity when compared with traditional classification methods.
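
A minimal sketch of the accuracy and coverage measures and a combined GA fitness of the kind described, assuming a simple attribute-value rule encoding and toy data (the weights and encoding are illustrative assumptions, not the paper's formulation):

```python
# Sketch: rough-set-style accuracy and coverage of a rule, plus a fitness
# that also rewards short rules. Encoding, weights, and data are assumed.
import numpy as np

def rule_stats(rule, X, y, target):
    """rule: dict {column_index: required_value}; target: class label."""
    covered = np.all([X[:, a] == v for a, v in rule.items()], axis=0)
    in_class = (y == target)
    n_cov = covered.sum()
    if n_cov == 0:
        return 0.0, 0.0
    accuracy = (covered & in_class).sum() / n_cov           # P(class | rule)
    coverage = (covered & in_class).sum() / in_class.sum()  # P(rule | class)
    return accuracy, coverage

def fitness(rule, X, y, target, w_acc=0.5, w_cov=0.4, w_len=0.1):
    acc, cov = rule_stats(rule, X, y, target)
    # Shorter rules (fewer conditions) score higher, per the simplicity aim.
    length_score = 1.0 - len(rule) / X.shape[1]
    return w_acc * acc + w_cov * cov + w_len * length_score

# Toy categorical decision table: 3 attributes, binary class.
X = np.array([[0, 1, 2], [0, 1, 0], [1, 0, 2], [0, 0, 2], [1, 1, 1]])
y = np.array([1, 1, 0, 1, 0])
print(fitness({0: 0}, X, y, target=1))  # rule: attribute 0 == 0 -> class 1
```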

Relevance:

20.00%

Publisher:

Abstract:

Selecting a set of features that is optimal for a given task is a problem that plays an important role in a wide variety of contexts, including pattern recognition, image understanding, and machine learning. The concept of the reduct of a decision table, based on rough set theory, is very useful for feature selection. In this paper, a genetic algorithm-based approach is presented to search for the relative reduct of the decision table. This approach can accommodate multiple criteria, such as accuracy and cost of classification, in the feature selection process, and finds an effective feature subset for texture classification. On the basis of the selected feature subset, the paper presents a method to extract objects that stand higher than their surroundings, such as trees or forest, in colour aerial images. The experimental results show that the selected feature subset and the object extraction method presented in this paper are practical and effective.
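
A minimal sketch of a rough-set fitness that a GA reduct search of this kind could use, based on the standard dependency degree gamma(B, D) with a cost penalty (the toy decision table, costs, and penalty weight are illustrative assumptions):

```python
# Sketch: rough-set dependency degree as a GA fitness for reduct search,
# with a feature-cost penalty. All concrete values here are assumed.
import numpy as np

def dependency_degree(X, y, subset):
    """gamma(B, D) = |POS_B(D)| / |U|: fraction of objects whose
    B-indiscernibility class maps to a single decision value."""
    if not subset:
        return 0.0
    keys = [tuple(row) for row in X[:, subset]]
    classes = {}
    for k, label in zip(keys, y):
        classes.setdefault(k, set()).add(label)
    pos = sum(1 for k in keys if len(classes[k]) == 1)
    return pos / len(y)

def fitness(subset, X, y, costs, w_cost=0.2):
    gamma = dependency_degree(X, y, subset)
    penalty = sum(costs[f] for f in subset) / sum(costs)
    return gamma - w_cost * penalty

# Toy discrete decision table: 4 conditional attributes, binary decision.
X = np.array([[0, 1, 1, 0], [0, 1, 0, 0], [1, 0, 1, 1],
              [1, 1, 1, 0], [0, 0, 0, 1]])
y = np.array([1, 1, 0, 0, 1])
costs = np.array([1.0, 2.0, 1.0, 3.0])
print(fitness([0, 1], X, y, costs))  # candidate subset {attr 0, attr 1}
```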

Relevance:

20.00%

Publisher:

Abstract:

Rough set theory is a new mathematical approach to imprecision, vagueness and uncertainty. The concept of the reduct of a decision table, based on rough sets, is very useful for feature selection. This paper describes an application of the rough set method to feature selection and reduction in texture image recognition. The methods applied include continuous-data discretization based on fuzzy c-means, and the rough set method for feature selection and reduction. The methods were applied to tree extraction in aerial images. The experiments show that the methods presented in this paper are practical and effective.
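
A minimal sketch of the fuzzy c-means discretization step the abstract mentions, in plain NumPy (the cluster count and fuzzifier m are illustrative assumptions):

```python
# Sketch: discretizing one continuous feature via fuzzy c-means; each value
# is assigned the label of its highest-membership cluster. Settings assumed.
import numpy as np

def fcm_discretize(x, c=3, m=2.0, n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    centers = rng.choice(x, size=c, replace=False).astype(float)
    for _ in range(n_iter):
        d = np.abs(x[:, None] - centers[None, :]) + 1e-12  # (n, c) distances
        # u[k, i] = 1 / sum_j (d[k, i] / d[k, j])^(2 / (m - 1))
        u = 1.0 / np.sum((d[:, :, None] / d[:, None, :]) ** (2 / (m - 1)),
                         axis=2)
        centers = (u ** m).T @ x / (u ** m).sum(axis=0)    # weighted means
    return np.argmax(u, axis=1)  # discrete label per value

x = np.array([0.1, 0.15, 0.4, 0.55, 0.9, 0.95, 0.5])
print(fcm_discretize(x))  # three-level discretization, labels up to permutation
```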

Relevance:

20.00%

Publisher:

Abstract:

We study blind identification and equalization of finite impulse response (FIR) multi-input multi-output (MIMO) channels driven by colored signals. We first show a sufficient condition for an FIR MIMO channel to be identifiable up to a scaling and permutation using the second-order statistics of the channel output. This condition is that the channel matrix is irreducible (but not necessarily column-reduced), and the input signals are mutually uncorrelated and of distinct power spectra. We also show that this condition is necessary in the sense that no single part of the condition can be further weakened without another part being strengthened. While the above condition is a strong result that sets a fundamental limit on blind identification, there does not yet exist a working algorithm under that condition. In the second part of this paper, we show that a method called blind identification via decorrelating subchannels (BIDS) can uniquely identify an FIR MIMO channel if a) the channel matrix is nonsingular (almost everywhere) and column-wise coprime and b) the input signals are mutually uncorrelated and of sufficiently diverse power spectra. The BIDS method requires a weaker condition on the channel matrix than that required by most existing methods for the same problem.
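
For reference, the FIR MIMO model and the output second-order statistics involved can be written as follows (notation is ours, not necessarily the paper's):

```latex
% FIR MIMO model and the output second-order statistics it relies on
% (illustrative notation).
\[
  \mathbf{x}(n) = \sum_{k=0}^{L} \mathbf{H}(k)\,\mathbf{s}(n-k) + \mathbf{w}(n),
  \qquad
  \mathbf{R}_x(\tau) = \mathbb{E}\!\left[\mathbf{x}(n)\,\mathbf{x}^{H}(n-\tau)\right],
\]
where $\mathbf{s}(n)$ collects the mutually uncorrelated inputs with distinct
power spectra, $\mathbf{H}(k)$ is the channel impulse response, and blind
identification recovers $\mathbf{H}$ up to scaling and permutation from the
set $\{\mathbf{R}_x(\tau)\}$.
```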

Relevance:

20.00%

Publisher:

Abstract:

One of the key applications of microarray studies is to select and classify gene expression profiles of cancer and normal subjects. In this study, two hybrid approaches, genetic algorithm with decision tree (GADT) and genetic algorithm with neural network (GANN), are utilized to select the optimal gene sets that contribute to the highest classification accuracy. Two benchmark microarray datasets were tested, and the most significant disease-related genes have been identified. Furthermore, the selected gene sets achieved comparably high sample classification accuracy (96.79% and 94.92% on the colon cancer dataset, 98.67% and 98.05% on the leukemia dataset) compared with those obtained by the mRMR algorithm. The results indicate that these two hybrid methods are able to select disease-related genes and improve classification accuracy.
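
A minimal sketch of the GADT idea, assuming a GA whose fitness is the cross-validated accuracy of a decision tree on the selected genes (the synthetic data and GA settings are illustrative assumptions, not the paper's):

```python
# Sketch: GA gene selection with a decision-tree fitness (GADT-style).
# Synthetic data; population size, rates, and generations are assumed.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=60, n_features=200, n_informative=10,
                           random_state=0)
rng = np.random.default_rng(0)

def fitness(mask):
    if mask.sum() == 0:
        return 0.0
    model = DecisionTreeClassifier(random_state=0)
    return cross_val_score(model, X[:, mask], y, cv=3).mean()

pop = rng.random((20, X.shape[1])) < 0.05            # sparse initial masks
for gen in range(10):
    fits = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(fits)[-10:]]            # truncation selection
    cut = rng.integers(1, X.shape[1], size=10)
    children = np.array([np.concatenate([parents[i][:c],
                                         parents[(i + 1) % 10][c:]])
                         for i, c in enumerate(cut)])  # one-point crossover
    children ^= rng.random(children.shape) < 0.01      # bit-flip mutation
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(m) for m in pop])]
print(f"selected {best.sum()} genes")
```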

Relevance:

20.00%

Publisher:

Abstract:

Data streams are usually generated in an online fashion characterized by huge volume, rapid unpredictable rates, and fast-changing data characteristics. It has hence been recognized that mining streaming data requires the problem of limited computational resources to be adequately addressed. Since the arrival rate of data streams can significantly increase and exceed the CPU capacity, the machinery must adapt to this change to guarantee the timeliness of the results. We present an online algorithm to approximate a set of frequent patterns from a sliding window over the underlying data stream, given a priori CPU capacity. The algorithm automatically detects overload situations and can adaptively shed unprocessed data to guarantee timely results. We theoretically prove, using probabilistic and deterministic techniques, that the error on the output results is bounded within a pre-specified threshold. Empirical results on various datasets also confirm the feasibility of our proposal.
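
A minimal sketch of the general adapt-and-shed idea, assuming probabilistic shedding with count rescaling; this illustrates the concept only and is not the authors' algorithm (window size, budget, and rates are invented for the example):

```python
# Sketch: approximate frequent-item counting over a sliding window with
# probabilistic load shedding when arrivals exceed a processing budget.
import random
from collections import Counter, deque

WINDOW = 1000   # sliding-window length, in transactions (assumed)
BUDGET = 200    # transactions we can fully process per time unit (assumed)

def mine_stream(stream, arrival_rate):
    window, counts = deque(), Counter()
    # Shed probabilistically whenever arrivals exceed the CPU budget.
    keep_p = min(1.0, BUDGET / arrival_rate)
    for item in stream:
        if random.random() > keep_p:
            continue                       # shed this transaction unprocessed
        window.append(item)
        counts[item] += 1.0 / keep_p       # rescale to compensate shedding
        if len(window) > WINDOW * keep_p:  # window holds the kept sample only
            old = window.popleft()
            counts[old] -= 1.0 / keep_p
    return {x: c for x, c in counts.items() if c >= 0.05 * WINDOW}

stream = [random.choice("abcdefg") for _ in range(10_000)]
print(mine_stream(stream, arrival_rate=500))  # overloaded: keep_p = 0.4
```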

Relevance:

20.00%

Publisher:

Abstract:

A retrospective assessment of exposure to benzene was carried out for a nested case-control study of lympho-haematopoietic cancers, including leukaemia, in the Australian petroleum industry. Each job or task in the industry was assigned a Base Estimate (BE) of exposure derived from task-based personal exposure assessments carried out by company occupational hygienists. The BEs corresponded to the estimated arithmetic mean exposure to benzene for each job or task and were used in a deterministic algorithm to estimate the exposure of subjects in the study.

Nearly all of the data sets underlying the BEs were found to contain some values below the limit of detection (LOD) of the sampling and analytical methods, and some were very heavily censored: up to 95% of the data were below the LOD in some data sets. It was therefore necessary to use a method of calculating the arithmetic mean exposures that took the censored data into account. Three different methods were employed in an attempt to select the most appropriate method for the particular data in the study. A common method is to replace the missing (censored) values with half the detection limit; this method has been recommended for data sets where much of the data are below the limit of detection or where the data are highly skewed, with a geometric standard deviation of 3 or more. Another method, replacing the censored data with the limit of detection divided by the square root of 2, has been recommended when relatively few data are below the detection limit or where the data are not highly skewed. A third method examined is Cohen's method, which involves mathematical extrapolation of the left-hand tail of the distribution, based on the distribution of the uncensored data, and calculation of the maximum likelihood estimate of the arithmetic mean.

When these three methods were applied to the data in this study, it was found that the first two simple methods gave similar results in most cases. Cohen's method, on the other hand, gave results that were generally, but not always, higher than the simpler methods, and in some cases gave extremely high and even implausible estimates of the mean. It appears that if the data deviate substantially from a simple log-normal distribution, particularly if high outliers are present, then Cohen's method produces erratic and unreliable estimates. After examining these results, and both the distributions and proportions of censored data, it was decided that the half-limit-of-detection method was most suitable in this particular study.
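
A minimal sketch of the three estimators compared, with the Cohen-style estimate computed as a maximum likelihood fit of a log-normal to left-censored data (the toy sample and LOD are illustrative assumptions, not study data):

```python
# Sketch: three ways to estimate an arithmetic mean from left-censored data:
# LOD/2 substitution, LOD/sqrt(2) substitution, and a censored-lognormal MLE.
import numpy as np
from scipy import stats, optimize

detects = np.array([0.12, 0.30, 0.45, 0.08, 1.10])  # toy measured values (ppm)
n_censored = 15                                      # toy count below the LOD
lod = 0.05

def substitution_mean(fill):
    return np.mean(np.concatenate([detects, np.full(n_censored, fill)]))

def censored_lognormal_mean():
    def nll(params):
        mu, sigma = params
        if sigma <= 0:
            return np.inf
        # log-likelihood: density for detects, CDF mass for censored values
        ll = stats.norm.logpdf(np.log(detects), mu, sigma).sum()
        ll += n_censored * stats.norm.logcdf(np.log(lod), mu, sigma)
        return -ll
    mu, sigma = optimize.minimize(nll, x0=[np.log(detects).mean(), 1.0],
                                  method="Nelder-Mead").x
    return np.exp(mu + sigma ** 2 / 2)  # arithmetic mean of a log-normal

print("LOD/2            :", substitution_mean(lod / 2))
print("LOD/sqrt(2)      :", substitution_mean(lod / np.sqrt(2)))
print("MLE (Cohen-style):", censored_lognormal_mean())
```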

Relevance:

20.00%

Publisher:

Abstract:

The maximum speed at which magnesium can be extruded is considerably slower than that of many common aluminium extrusion alloys. This affects both the economies of production and the final mechanical behaviour. The present work quantifies the limiting extrusion speeds and ratios of magnesium alloy AZ31 as a function of billet temperature. This is done by combining hot compression test results, FE simulations and extrusion trials. Hot working stress–strain curves displayed a distinct dynamic recrystallisation peak. These data were used as a “look-up” table for the FE simulations in which the cracking limit was assumed to occur when the surface temperature reaches the incipient melting point. The maximum extrusion ratio predicted using FE analysis dropped from 90 to 40 when the extrusion ram speed was raised from 5 to 50 mm/s. The predicted limits agree well with the occurrence of cracking in both a laboratory and a commercial extrusion trial.

Relevance:

20.00%

Publisher:

Abstract:

In data stream applications, a good approximation obtained in a timely manner is often better than an exact answer that is delayed beyond the window of opportunity. Of course, the quality of the approximation is as important as its timely delivery. Unfortunately, algorithms capable of online processing do not conform strictly to a precise error guarantee. Since online processing is essential and so is precision of the error, stream algorithms must meet both criteria. Yet this is not the case for mining frequent sets in data streams. We present EStream, a novel algorithm that allows online processing while producing results strictly within the error bound. Our theoretical and experimental results show that EStream is a better candidate for finding frequent sets in data streams when both constraints need to be satisfied.
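
For comparison, the classic lossy counting algorithm of Manku and Motwani offers the same kind of strict error guarantee the abstract calls for (every reported count is within eps * N of the true count); a minimal sketch follows, and it illustrates error-bounded stream mining generally, not EStream itself:

```python
# Sketch: lossy counting for frequent items with a strict eps*N error bound.
# Shown as a reference technique; this is not the EStream algorithm.
import math

def lossy_count(stream, support=0.01, eps=0.001):
    bucket_width = math.ceil(1 / eps)
    counts, n = {}, 0
    for item in stream:
        n += 1
        current_bucket = math.ceil(n / bucket_width)
        if item in counts:
            counts[item][0] += 1
        else:
            counts[item] = [1, current_bucket - 1]  # [count, max undercount]
        if n % bucket_width == 0:                   # prune at bucket boundary
            counts = {x: cd for x, cd in counts.items()
                      if cd[0] + cd[1] > current_bucket}
    # reported items have true frequency at least (support - eps) * n
    return {x: cd[0] for x, cd in counts.items()
            if cd[0] >= (support - eps) * n}

stream = ["a"] * 500 + ["b"] * 300 + list("cdefg") * 40
print(lossy_count(stream))
```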

Relevance:

20.00%

Publisher:

Abstract:

Diving animals offer a unique opportunity to study the importance of physiological constraint and the limitation it can impose on an animal's behaviour in nature. This paper examines the interaction between physiology and behaviour and its impact on the diving capability of five eared seal species (family Otariidae; three sea lions and two fur seals). An important physiological parameter for diving marine mammals is the aerobic dive limit (ADL). The ADL of these five seal species was estimated from measurements of their total body oxygen stores, coupled with estimates of their metabolic rate while diving. The tendency of each species to exceed its calculated ADL was compared relative to its diving behaviour. Overall, our analyses reveal that seals which forage benthically (i.e. on the sea floor) have a greater tendency to approach or exceed their ADL than seals that forage epipelagically (i.e. near the sea surface). Furthermore, the marked differences in foraging behaviour and physiology appear to be coupled with a species' demography. For example, benthic foraging species have smaller populations and lower growth rates than species that forage epipelagically. These patterns are relevant to the conservation and management of diving vertebrates.
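
The calculated ADL described here is the standard ratio of the two quantities the abstract names (notation is ours):

```latex
% Calculated aerobic dive limit from total body oxygen stores and diving
% metabolic rate (standard formulation; notation is illustrative).
\[
  \mathrm{cADL} = \frac{\text{total body O}_2 \text{ stores (ml O}_2\text{)}}
                       {\text{diving metabolic rate (ml O}_2\,\text{min}^{-1}\text{)}}
\]
```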