245 results for Sequential extraction tests
Abstract:
In this paper, we propose a novel relay ordering and scheduling strategy for the sequential slotted amplify-and-forward (SAF) protocol and evaluate its performance in terms of the diversity-multiplexing trade-off (DMT). The relays between the source and destination are grouped into two relay clusters based on their respective locations. The proposed strategy achieves partial relay isolation and decreases the decoding complexity at the destination. We show that the DMT upper bound of sequential-SAF with the proposed strategy outperforms other amplify-and-forward protocols and is more practical than the relay isolation assumption made in the original paper [1]. Simulation results show that the sequential-SAF protocol with the proposed strategy has better outage performance than the existing AF and non-cooperative protocols in the high SNR regime.
Abstract:
In this paper, we propose a novel slotted hybrid cooperative protocol named the sequential slotted amplify-decode-and-forward (SADF) protocol and evaluate its performance in terms of the diversity-multiplexing trade-off (DMT). The relays between the source and destination are divided into two different groups, and each relay either amplifies or decodes the received signal. We first compute the optimal DMT of the proposed protocol under the assumption of perfect decoding at the DF relays. We then derive the closed-form DMT expression of the proposed sequential-SADF and obtain the proximity gain bound for achieving the optimal DMT. With the proximity gain bound, we then find the distance ratio that achieves the optimal DMT performance. Simulation results show that the proposed protocol with high proximity gain outperforms other cooperative communication protocols in the high SNR regime.
Abstract:
Current design standards do not provide adequate guidelines for the fire design of cold-formed steel compression members subject to flexural-torsional buckling. Eurocode 3 Part 1.2 (2005) recommends the same fire design guidelines for both hot-rolled and cold-formed steel compression members subject to flexural-torsional buckling, although considerable behavioural differences exist between cold-formed and hot-rolled steel members. Past research has recommended the use of ambient temperature cold-formed steel design rules for the fire design of cold-formed steel compression members, provided appropriately reduced mechanical properties are used at elevated temperatures. To assess the accuracy of flexural-torsional buckling design rules in both ambient temperature cold-formed steel design and fire design standards, an experimental study of slender cold-formed steel compression members was undertaken at both ambient and elevated temperatures. This paper presents the details of this experimental study, its results, and their comparison with the predictions from the current design rules. It was found that the current ambient temperature design rules are conservative while the fire design rules are overly conservative. Suitable recommendations have been made in relation to the currently available design rules for flexural-torsional buckling, including methods of improvement. Most importantly, this paper has addressed the lack of experimental results for slender cold-formed steel columns at elevated temperatures.
Abstract:
Chatrooms, for example Internet Relay Chat, are generally multi-user, multi-channel and multi-server chat systems which run over the Internet and provide a protocol for real-time text-based conferencing between users all over the world. While a well-trained human observer is able to understand who is chatting with whom, there are no efficient and accurate automated tools to determine the groups of users conversing with each other. A precursor to analysing evolving cyber-social phenomena is to first determine what the conversations are and which groups of chatters are involved in each conversation. We consider this problem in this paper. We propose an algorithm to discover all groups of users that are engaged in conversation. Our algorithm is based on a statistical model of a chatroom that is founded on our experience with real chatrooms. Our approach does not require any semantic analysis of the conversations; rather, it is based purely on the statistical information contained in the sequence of posts. We improve the accuracy by applying some graph algorithms to clean the statistical information. We present some experimental results which indicate that one can automatically determine the conversing groups in a chatroom purely on the basis of statistical analysis.
Abstract:
The future emergence of many types of airborne vehicles and unpiloted aircraft in the national airspace means collision avoidance is of primary concern in an uncooperative airspace environment. The ability to replicate a pilot’s see-and-avoid capability using cameras coupled with vision-based avoidance control is an important part of an overall collision avoidance strategy. Unfortunately, without range information, collision avoidance has no direct way to guarantee a level of safety. Collision scenario flight tests with two aircraft and a monocular camera threat detection and tracking system were used to study the accuracy of image-derived angle measurements. The effect of image-derived angle errors on reactive vision-based avoidance performance was then studied by simulation. The results show that whilst large angle measurement errors can significantly affect minimum ranging characteristics across a variety of initial conditions and closing speeds, the minimum range is always bounded and a collision never occurs.
Abstract:
With the overwhelming increase in the amount of text on the web, it is almost impossible for people to keep abreast of up-to-date information. Text mining is a process by which interesting information is derived from text through the discovery of patterns and trends, and text mining algorithms are used to guarantee the quality of the extracted knowledge. However, the patterns extracted by text or data mining methods are often noisy and inconsistent. Thus, different challenges arise, such as how to understand these patterns, whether the model that has been used is suitable, and whether all the patterns that have been extracted are relevant. Furthermore, the research raises the question of how to give a correct weight to the extracted knowledge. To address these issues, this paper presents a text post-processing method which uses a pattern co-occurrence matrix to find the relations between extracted patterns in order to reduce noisy patterns. The main objective of this paper is not only to reduce the number of closed sequential patterns, but also to improve the performance of pattern mining. The experimental results on the Reuters Corpus Volume 1 data collection and TREC filtering topics show that the proposed method is promising.
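A pattern co-occurrence matrix of the kind described above can be sketched as follows. This is a minimal illustration, not the paper's implementation; the function names and the pruning threshold are hypothetical, and the idea is simply to drop "noisy" patterns that never co-occur with other extracted patterns:

```python
from collections import defaultdict
from itertools import combinations

def pattern_cooccurrence(pattern_sets):
    """Count how often each pair of extracted patterns occurs in the same document.

    pattern_sets: iterable of per-document collections of pattern ids.
    Returns a dict mapping an ordered pair (a, b), with a < b, to its count.
    """
    counts = defaultdict(int)
    for patterns in pattern_sets:
        for a, b in combinations(sorted(set(patterns)), 2):
            counts[(a, b)] += 1
    return dict(counts)

def prune_isolated_patterns(pattern_sets, min_cooccur=2):
    """Keep only patterns that co-occur with some other pattern
    at least min_cooccur times; isolated patterns are treated as noise."""
    counts = pattern_cooccurrence(pattern_sets)
    keep = set()
    for (a, b), c in counts.items():
        if c >= min_cooccur:
            keep.add(a)
            keep.add(b)
    return keep
```

The co-occurrence counts play the role of the relations between extracted patterns: a pattern with no sufficiently frequent partner is filtered out before weighting.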
Abstract:
PURPOSE: To test the reliability of Timed Up and Go Tests (TUGTs) in cardiac rehabilitation (CR) and compare TUGTs to the 6-Minute Walk Test (6MWT) for outcome measurement. METHODS: Sixty-one of 154 consecutive community-based CR patients were prospectively recruited. Subjects undertook repeated TUGTs and 6MWTs at the start of CR (start-CR), postdischarge from CR (post-CR), and 6 months postdischarge from CR (6 months post-CR). The main outcome measurements were TUGT time (TUGTT) and 6MWT distance (6MWD). RESULTS: Mean (SD) TUGTT1 and TUGTT2 at the 3 assessments were 6.29 (1.30) and 5.94 (1.20); 5.81 (1.22) and 5.53 (1.09); and 5.39 (1.60) and 5.01 (1.28) seconds, respectively. A reduction in TUGTT occurred between each outcome point (P ≤ .002). Repeated TUGTTs were strongly correlated at each assessment, intraclass correlation (95% CI) = 0.85 (0.76–0.91), 0.84 (0.73–0.91), and 0.90 (0.83–0.94), despite a reduction between TUGTT1 and TUGTT2 of 5%, 5%, and 7%, respectively (P ≤ .006). Relative decreases in TUGTT1 (TUGTT2) occurred from start-CR to post-CR and from start-CR to 6 months post-CR of −7.5% (−6.9%) and −14.2% (−15.5%), respectively, while relative increases in 6MWD1 (6MWD2) occurred, 5.1% (7.2%) and 8.4% (10.2%), respectively (P < .001 in all cases). Pearson correlation coefficients for 6MWD1 to TUGTT1 and TUGTT2 across all times were −0.60 and −0.68 (P < .001) and the intraclass correlations (95% CI) for the speeds derived from averaged 6MWDs and TUGTTs were 0.65 (0.54, 0.73) (P < .001). CONCLUSIONS: Similar relative changes occurred for the TUGT and the 6MWT in CR. A significant correlation between the TUGTT and 6MWD was demonstrated, and we suggest that the TUGT may provide a related or a supplementary measurement of functional capacity in CR.
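The intraclass correlations reported for the repeated TUGTs can be computed with a one-way random-effects ICC. Below is a minimal sketch assuming an n-subjects by k-trials matrix of times; this is illustrative only, since the abstract does not state which ICC model the authors used:

```python
import numpy as np

def icc_oneway(ratings):
    """One-way random-effects intraclass correlation, ICC(1,1).

    ratings: (n_subjects, k_trials) array, e.g. repeated TUGT times in seconds.
    """
    ratings = np.asarray(ratings, dtype=float)
    n, k = ratings.shape
    grand_mean = ratings.mean()
    subject_means = ratings.mean(axis=1)
    # Between-subject and within-subject mean squares from the one-way ANOVA
    ms_between = k * ((subject_means - grand_mean) ** 2).sum() / (n - 1)
    ms_within = ((ratings - subject_means[:, None]) ** 2).sum() / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)
```

Identical repeated trials give an ICC of 1; values near the paper's 0.85-0.90 indicate strong test-retest reliability despite the small systematic improvement between trials.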
Abstract:
Russell, Benton and Kingsley (2010) recently suggested a new association football test comprising three different tasks for the evaluation of players' passing, dribbling and shooting skills. Their stated intention was to enhance ‘ecological validity’ of current association football skills tests allowing generalisation of results from the new protocols to performance constraints that were ‘representative’ of experiences during competitive game situations. However, in this comment we raise some concerns with their use of the term ‘ecological validity’ to allude to aspects of ‘representative task design’. We propose that in their paper the authors confused understanding of environmental properties, performance achievement and generalisability of the test and its outcomes. Here, we argue that the tests designed by Russell and colleagues did not include critical sources of environmental information, such as the active role of opponents, which players typically use to organise their actions during performance. Static tasks which are not representative of the competitive performance environment may lead to different emerging patterns of movement organisation and performance outcomes, failing to effectively evaluate skills performance in sport.
Abstract:
The power of testing for a population-wide association between a biallelic quantitative trait locus and a linked biallelic marker locus is predicted both empirically and deterministically for several tests. The tests were based on the analysis of variance (ANOVA) and on a number of transmission disequilibrium tests (TDT). Deterministic power predictions made use of family information, and were functions of population parameters including linkage disequilibrium, allele frequencies, and recombination rate. Deterministic power predictions were very close to the empirical power from simulations in all scenarios considered in this study. The different TDTs had very similar power, intermediate between one-way and nested ANOVAs. One-way ANOVA was the only test that was not robust against spurious disequilibrium. Our general framework for predicting power deterministically can be used to predict power in other association tests. Deterministic power calculations are a powerful tool for researchers to plan and evaluate experiments and obviate the need for elaborate simulation studies.
Abstract:
The Beauty Leaf tree (Calophyllum inophyllum) is a potential source of non-edible vegetable oil for producing future generation biodiesel because of its ability to grow in a wide range of climate conditions, easy cultivation, high fruit production rate, and the high oil content in the seed. This plant naturally occurs in the coastal areas of Queensland and the Northern Territory in Australia, and is also widespread in south-east Asia, India and Sri Lanka. Although Beauty Leaf is traditionally used as a source of timber and as an ornamental plant, its potential as a source of second generation biodiesel is yet to be exploited. In this study, the extraction process for Beauty Leaf seed oil has been optimised in terms of seed preparation, moisture content and oil extraction method. The two methods considered for extracting oil from the seed kernel are mechanical extraction using an electric-powered screw press, and chemical extraction using n-hexane as an oil solvent. The study found that seed preparation has a significant impact on oil yields, especially in the screw press extraction method. Kernels prepared to 15% moisture content provided the highest oil yields for both extraction methods. Mechanical extraction using the screw press can produce oil from correctly prepared kernels at low cost; however, overall this method is ineffective, with relatively low oil yields. Chemical extraction was found to be a very effective method for oil extraction owing to its consistent performance and high oil yield, but the cost of production was relatively high due to the cost of the solvent. However, a solvent recycling system can be implemented to reduce the production cost of Beauty Leaf biodiesel. The findings of this study are expected to serve as the basis from which industrial-scale biodiesel production from Beauty Leaf can be developed.
Abstract:
A satellite-based observation system can continuously or repeatedly generate a user state vector time series that may contain useful information. One typical example is the collection of International GNSS Service (IGS) station daily and weekly combined solutions. Another example is the epoch-by-epoch kinematic position time series of a receiver derived by a GPS real-time kinematic (RTK) technique. Although some multivariate analysis techniques have been adopted to assess the noise characteristics of multivariate state time series, statistical testing has been limited to univariate time series. After a review of frequently used hypothesis test statistics in univariate analysis of GNSS state time series, the paper presents a number of T-squared multivariate test statistics for use in the analysis of multivariate GNSS state time series. These T-squared test statistics take the correlation between coordinate components into account, which is neglected in univariate analysis. Numerical analysis was conducted with the multi-year time series of an IGS station to demonstrate the results of the multivariate hypothesis testing in comparison with the univariate hypothesis testing results. The results demonstrate that, in general, testing for multivariate mean shifts and outliers tends to reject fewer data samples than testing for univariate mean shifts and outliers at the same confidence level. It is noted that neither univariate nor multivariate data analysis methods are intended to replace physical analysis. Instead, these should be treated as complementary statistical methods for a priori or a posteriori investigations. Subsequent physical analysis is necessary to refine and interpret the results.
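A T-squared test of the kind described, applied as a one-sample mean-shift test on a multivariate coordinate time series, can be sketched as follows. This is a minimal illustration under the usual Hotelling's T-squared formulation, assuming the state vectors (e.g. daily east/north/up residuals) are rows of a NumPy array; the function name is hypothetical:

```python
import numpy as np
from scipy import stats

def hotelling_t2_test(X, mu0, alpha=0.05):
    """One-sample Hotelling's T-squared test for a multivariate mean shift.

    X    : (n, p) array of state vectors, e.g. daily E/N/U coordinate residuals
    mu0  : (p,) hypothesised mean vector
    The sample covariance S captures the correlation between coordinate
    components that a univariate test would neglect.
    """
    n, p = X.shape
    xbar = X.mean(axis=0)
    S = np.cov(X, rowvar=False)           # (p, p) sample covariance
    d = xbar - mu0
    t2 = n * d @ np.linalg.solve(S, d)    # T^2 = n (xbar-mu0)' S^-1 (xbar-mu0)
    # Under H0, a scaled T^2 follows an F(p, n-p) distribution
    f_stat = (n - p) / (p * (n - 1)) * t2
    p_value = stats.f.sf(f_stat, p, n - p)
    return t2, p_value, p_value < alpha
```

Because the three coordinate components are tested jointly against one threshold, such a test can reject fewer samples than three separate univariate tests run at the same confidence level, consistent with the behaviour reported above.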
Abstract:
High-stakes literacy testing is now a ubiquitous educational phenomenon. However, it remains a relatively recent phenomenon in Australia, so it is possible to study the ways in which such tests are reorganising educators’ work during this period of change. This paper draws upon Dorothy Smith’s Institutional Ethnography and critical policy analysis to consider this problem, reporting on interview data from teachers and the principal in a small rural school in a poor area of South Australia. In this context, high-stakes testing and the associated diagnostic school review unleash a chain of actions within the school which ultimately results in educators doubting their professional judgments, increasing their investment in testing, narrowing their teaching of literacy and purchasing levelled reading schemes. The effects of high-stakes testing in disadvantaged schools are identified and discussed.
Abstract:
This paper presents the details of experimental studies on the shear behaviour and strength of lipped channel beams (LCBs). The LCB sections are commonly used as flexural members in residential, industrial and commercial buildings. To ensure safe and efficient designs of LCBs, many research studies have been undertaken on the flexural behaviour of LCBs. To date, however, limited research has been conducted into the strength of LCB sections subject to shear actions. Therefore, a detailed experimental study involving 20 tests was undertaken to investigate the shear behaviour and strength of LCBs. This research has shown the presence of increased shear capacity of LCBs due to the additional fixity along the web-to-flange juncture, but the current design rules (AS/NZS 4600 and AISI) ignore this effect and were thus found to be conservative. Therefore they were modified by including a higher elastic shear buckling coefficient. Ultimate shear capacity results obtained from the shear tests were compared with the modified shear capacity design rules. It was found that they are still conservative as they ignore the presence of post-buckling strength. Hence the AS/NZS 4600 and AISI design rules were further modified to include the available post-buckling strength. Suitable design rules were also developed under the direct strength method (DSM) format. This paper presents the details of this study and the results, including the modified shear design rules.
Abstract:
Classifier selection is a problem encountered by multi-biometric systems that aim to improve performance through fusion of decisions. A particular decision fusion architecture that combines multiple instances (n classifiers) and multiple samples (m attempts at each classifier) has been proposed in previous work to achieve a controlled trade-off between false alarms and false rejects. Although analysis on text-dependent speaker verification has demonstrated better performance for fusion of decisions with favourable dependence compared to statistically independent decisions, the performance is not always optimal. Given a pool of instances, best performance with this architecture is obtained for certain combinations of instances. Heuristic rules and diversity measures have been commonly used for classifier selection, but it is shown that optimal performance is achieved for the 'best combination performance' rule. As the search complexity of this rule increases exponentially with the addition of classifiers, a measure, the sequential error ratio (SER), is proposed in this work that is specifically adapted to the characteristics of the sequential fusion architecture. The proposed measure can be used to select the classifier that is most likely to produce a correct decision at each stage. Error rates for fusion of text-dependent HMM-based speaker models using SER are compared with other classifier selection methodologies. SER is shown to achieve near-optimal performance for sequential fusion of multiple instances with or without the use of multiple samples. The methodology applies to multiple speech utterances for telephone- or internet-based access control, and to other systems such as identity verification based on multiple fingerprints or multiple handwriting samples.