410 results for PROCESSING TECHNIQUE
Abstract:
Studies of orthographic skills transfer between languages focus mostly on working memory (WM) ability in alphabetic first language (L1) speakers when learning another, often alphabetically congruent, language. We report two studies that, instead, explored the transferability of L1 orthographic processing skills in WM in logographic-L1 and alphabetic-L1 speakers. English-French bilingual and English monolingual (alphabetic-L1) speakers, and Chinese-English (logographic-L1) speakers, learned a set of artificial logographs and associated meanings (Study 1). The logographs were used in WM tasks with and without concurrent articulatory or visuo-spatial suppression. The logographic-L1 bilinguals were markedly less affected by articulatory suppression than alphabetic-L1 monolinguals (who did not differ from their bilingual peers). Bilinguals overall were less affected by spatial interference, reflecting superior phonological processing skills or, conceivably, greater executive control. A comparison of span sizes for meaningful and meaningless logographs (Study 2) replicated these findings. However, the logographic-L1 bilinguals’ spans in L1 were measurably greater than those of their alphabetic-L1 (bilingual and monolingual) peers, a finding unaccounted for by faster articulation rates or differences in general intelligence. The overall pattern of results suggests an advantage (possibly perceptual) for logographic-L1 speakers, over and above the bilingual advantage also seen elsewhere in third language (L3) acquisition.
Abstract:
This paper develops and evaluates an enhanced corpus-based approach for semantic processing. Corpus-based models that build representations of words directly from text do not require pre-existing linguistic knowledge, and have demonstrated psychologically relevant performance on a number of cognitive tasks. However, they have been criticised in the past for not incorporating sufficient structural information. Using ideas underpinning recent attempts to overcome this weakness, we develop an enhanced tensor encoding model to build representations of word meaning for semantic processing. Our enhanced model demonstrates superior performance when compared to a robust baseline model on a number of semantic processing tasks.
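The basic idea of corpus-based semantic models, building word representations directly from text, can be illustrated with a minimal co-occurrence sketch. This is not the paper's tensor encoding model; the toy corpus and window size below are hypothetical, chosen only to show the principle:

```python
from collections import defaultdict
from math import sqrt

def cooccurrence_vectors(sentences, window=2):
    """Build a sparse vector per word from co-occurrence counts within a context window."""
    vecs = defaultdict(lambda: defaultdict(int))
    for tokens in sentences:
        for i, w in enumerate(tokens):
            for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
                if i != j:
                    vecs[w][tokens[j]] += 1
    return vecs

def cosine(u, v):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(u[k] * v.get(k, 0) for k in u)
    nu = sqrt(sum(x * x for x in u.values()))
    nv = sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

corpus = [
    "the cat chased the mouse".split(),
    "the dog chased the cat".split(),
    "the mouse ate the cheese".split(),
]
vecs = cooccurrence_vectors(corpus)
# "cat" and "dog" share contexts ("the", "chased"), so they score higher
# than unrelated pairs such as "dog" and "cheese".
print(cosine(vecs["cat"], vecs["dog"]))
```

Models of the kind the paper criticises stop here; the tensor encoding approach additionally records structural (word-order) information rather than bag-of-context counts alone.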
Abstract:
Most unsignalised intersection capacity calculation procedures are based on gap acceptance models, so the accuracy of critical gap estimation affects the accuracy of capacity and delay estimation. Several methods have been published to estimate drivers’ sample mean critical gap, with the Maximum Likelihood Estimation (MLE) technique regarded as the most accurate. This study assesses three novel methods, the Average Central Gap (ACG), Strength Weighted Central Gap (SWCG), and Mode Central Gap (MCG) methods, against MLE for their fidelity in rendering true sample mean critical gaps. A Monte Carlo event-based simulation model was used to draw the maximum rejected gap and the accepted gap for each of a sample of 300 drivers across 32 simulation runs. The simulation mean critical gap was varied between 3 s and 8 s, while the offered gap rate was varied between 0.05 veh/s and 0.55 veh/s. This study affirms that MLE provides a close-to-perfect fit to simulation mean critical gaps across a broad range of conditions. The MCG method also provides an almost perfect fit and has superior computational simplicity and efficiency to MLE. The SWCG method performs robustly under high flows but poorly under low to moderate flows. Further research is recommended using field traffic data, under a variety of minor-stream and major-stream flow conditions and minor-stream movement types, to compare critical gap estimates from MLE and MCG. Should the MCG method prove as robust as MLE, serious consideration should be given to its adoption for estimating critical gap parameters in guidelines.
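The event-based draw of each driver's maximum rejected gap and accepted gap can be sketched as follows, under the hedged assumption that offered gaps follow an exponential headway distribution with the offered gap rate as its rate parameter. The midpoint estimator at the end is purely illustrative, not the ACG, SWCG, MCG, or MLE procedure:

```python
import random

def simulate_driver(critical_gap, gap_rate, rng):
    """Offer exponentially distributed gaps until one is at least the driver's
    critical gap; return (maximum rejected gap, accepted gap)."""
    max_rejected = 0.0
    while True:
        gap = rng.expovariate(gap_rate)
        if gap >= critical_gap:
            return max_rejected, gap
        max_rejected = max(max_rejected, gap)

rng = random.Random(42)
# Hypothetical values within the ranges reported in the abstract.
true_mean, gap_rate, n_drivers = 5.0, 0.3, 300
pairs = [simulate_driver(rng.gauss(true_mean, 0.8), gap_rate, rng)
         for _ in range(n_drivers)]
# Naive estimator: midpoint of each driver's max rejected and accepted gap.
# Because accepted gaps can far exceed the critical gap at low flows, this
# is biased high -- exactly why the more careful estimators are needed.
estimate = sum((r + a) / 2 for r, a in pairs) / n_drivers
print(round(estimate, 2))
```

Each driver's critical gap is bracketed by the maximum rejected gap below and the accepted gap above, which is the information all of the compared estimators work from.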
Abstract:
Structural health monitoring (SHM) refers to the procedure used to assess the condition of structures so that their performance can be monitored and any damage can be detected early. Early detection of damage and appropriate retrofitting will aid in preventing failure of the structure, save money spent on maintenance or replacement, and ensure the structure operates safely and efficiently during its whole intended life. Though visual inspection and other techniques such as vibration-based ones are available for SHM of structures such as bridges, the acoustic emission (AE) technique is an attractive option and is increasingly used. AE waves are high frequency stress waves generated by rapid release of energy from localised sources within a material, such as crack initiation and growth. The AE technique involves recording these waves by means of sensors attached to the surface and then analysing the signals to extract information about the nature of the source. High sensitivity to crack growth, ability to locate the source, passive nature (no need to supply energy from outside, as energy from the damage source itself is utilised) and the possibility of real time monitoring (detecting a crack as it occurs or grows) are some of the attractive features of the AE technique. In spite of these advantages, challenges still exist in using the AE technique for monitoring applications, especially in the analysis of recorded AE data, as large volumes of data are usually generated during monitoring. The need for effective data analysis can be linked with three main aims of monitoring: (a) accurately locating the source of damage; (b) identifying and discriminating signals from different sources of acoustic emission; and (c) quantifying the level of damage of the AE source for severity assessment. In the AE technique, the location of the emission source is usually calculated using the times of arrival and velocities of the AE signals recorded by a number of sensors.
But complications arise because AE waves can travel through a structure in a number of different modes with different velocities and frequencies. Hence, to accurately locate a source it is necessary to identify the modes recorded by the sensors. This study proposed and tested the use of time-frequency analysis tools, such as the short-time Fourier transform, to identify the modes, and the use of the velocities of these modes to achieve very accurate results. Further, this study explored the possibility of reducing the number of sensors needed for data capture by using the velocities of modes captured by a single sensor for source localization. A major problem in the practical use of the AE technique is the presence of AE sources other than crack-related ones, such as rubbing and impacts between different components of a structure. These spurious AE signals often mask the signals from crack activity; hence, discriminating signals to identify their sources is very important. This work developed a model that uses different signal processing tools, such as cross-correlation, magnitude squared coherence and energy distribution in different frequency bands, as well as modal analysis (comparing amplitudes of identified modes), to accurately differentiate signals from different simulated AE sources. Quantification tools to assess the severity of damage sources are highly desirable in practical applications. Though different damage quantification methods have been proposed for the AE technique, not all have achieved universal approval or been found suitable for all situations. The b-value analysis, which involves studying the distribution of amplitudes of AE signals, and its modified form (known as improved b-value analysis) were investigated for suitability for damage quantification in ductile materials such as steel.
This was found to give encouraging results for the analysis of laboratory data, extending the possibility of its use to real-life structures. By addressing these primary issues, it is believed that this thesis has helped improve the effectiveness of the AE technique for structural health monitoring of civil infrastructure such as bridges.
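The arrival-time source location described in this abstract can be sketched in one dimension, assuming a single identified wave mode with known velocity (the sensor spacing, mode velocity and source position below are hypothetical):

```python
def locate_1d(t1, t2, sensor_gap, velocity):
    """Locate an AE source on a line between two sensors from the arrival-time
    difference of one wave mode travelling at a known velocity.
    From t1 - t2 = (x - (L - x)) / v it follows that x = (L + v*(t1 - t2)) / 2."""
    return (sensor_gap + velocity * (t1 - t2)) / 2

# Hypothetical example: sensors 2.0 m apart, a plate-wave mode at 5000 m/s,
# source 0.5 m from sensor 1. Arrival times measured from the emission instant.
L, v, x_true = 2.0, 5000.0, 0.5
t1, t2 = x_true / v, (L - x_true) / v
print(locate_1d(t1, t2, L, v))  # recovers 0.5
```

This is also why mode identification matters: using the arrival time of one mode with the velocity of another shifts the estimated position, which is the error the time-frequency analysis in the study is designed to avoid.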
Abstract:
This paper proposes the use of a battery energy storage (BES) system for the grid-connected doubly fed induction generator (DFIG). The BES helps store or release additional power under higher or lower wind speeds to maintain constant grid power. To achieve this, the DC link capacitor in a DFIG-based wind turbine is replaced with the BES system. The control scheme is modified, and the coordinated tuning of the associated controllers to enhance the damping of the oscillatory modes is presented using the bacterial foraging technique. Results from eigenvalue analysis and time domain simulation studies are presented to elucidate the effectiveness of the BES system in maintaining grid stability under normal operation.
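Eigenvalue analysis of the kind mentioned judges each oscillatory mode, a complex eigenvalue pair σ ± jω of the linearised system, by its damping ratio. A minimal sketch (the eigenvalues below are hypothetical, not taken from the paper's DFIG model):

```python
def damping_ratio(eig):
    """Damping ratio of an oscillatory mode sigma +/- j*omega:
    zeta = -sigma / sqrt(sigma**2 + omega**2)."""
    return -eig.real / abs(eig)

# Hypothetical eigenvalues of a linearised wind-turbine model: coordinated
# controller tuning should increase the damping ratio of poorly damped modes.
before = complex(-0.2, 6.0)
after = complex(-1.5, 5.8)
print(damping_ratio(before), damping_ratio(after))
```

A mode is stable when σ < 0, and better damped the larger ζ is; tuning that moves eigenvalues leftward in the complex plane is what "enhancing the damping of the oscillatory modes" refers to.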
Abstract:
To develop a rapid optimized technique of wide-field imaging of the human corneal subbasal nerve plexus. A dynamic fixation target was developed and, coupled with semiautomated tiling software, a rapid method of capturing and montaging multiple corneal confocal microscopy images was created. To illustrate the utility of this technique, wide-field maps of the subbasal nerve plexus were produced in 2 participants with diabetes, 1 with and 1 without neuropathy. The technique produced montages of the central 3 mm of the subbasal corneal nerve plexus. The maps seem to show a general reduction in the number of nerve fibers and branches in the diabetic participant with neuropathy compared with the individual without neuropathy. This novel technique will allow more routine and widespread use of subbasal nerve plexus mapping in clinical and research situations. The significant reduction in the time to image the corneal subbasal nerve plexus should expedite studies of larger groups of diabetic patients and those with other conditions affecting nerve fibers. The inferior whorl and the surrounding areas may show the greatest loss of nerve fibers in individuals with diabetic neuropathy, but this should be further investigated in a larger cohort.
Abstract:
In 2010, the State Library of Queensland (SLQ) donated their out-of-copyright Queensland images to Wikimedia Commons. One direct effect of publishing the collections on Wikimedia Commons is that general audiences can participate and help the library process the images in the collection. This paper discusses a project that explored user participation in the categorisation of the State Library of Queensland digital image collections. The outcomes of this project can be used to gain a better understanding of user participation that leads to improved access to library digital collections. Two data collection techniques were used: document analysis and interviews. Document analysis was performed on the Wikimedia Commons monthly reports and helped the researchers devise appropriate questions for the interviews, which served as the main data collection technique in this research. The interviews were undertaken with participants divided into two groups: SLQ staff members and Wikimedians (users who participate in Wikimedia). The two sets of data collected from participants were analysed independently and compared, allowing the researchers to understand the differences between the librarians’ and the users’ experiences of categorisation. This paper discusses the preliminary findings that emerged from each participant group. The research provides preliminary information about the extent of user participation in the categorisation of SLQ collections in Wikimedia Commons, which SLQ and other interested libraries can use when describing their digital content to improve user access to their collections in the future.
Abstract:
Today’s highly competitive market pushes the manufacturing industry to improve production systems towards the optimal system with the shortest possible cycle time. One of the most common problems in manufacturing systems is the assembly line balancing problem, which involves assigning tasks to workstations with optimum line efficiency. The line balancing technique “COMSOAL” is an abbreviation of “Computer Method for Sequencing Operations for Assembly Lines”. Arcus initially developed the COMSOAL technique in 1966 [1], and it has mainly been applied to solve assembly line balancing problems [6]. The most common purposes of COMSOAL are to minimise idle time, optimise production line efficiency, and minimise the number of workstations. This project therefore implements COMSOAL to balance an assembly line in the motorcycle industry. The COMSOAL solution is compared with a previous solution developed with the Multi‐Started Neighborhood Search Heuristic (MSNSH) on five aspects: cycle time, total idle time, line efficiency, average daily productivity rate, and workload balance. The journal article “Optimising and simulating the assembly line balancing problem in a motorcycle manufacturing company: a case study” is used as the case study for this project [5].
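The core of COMSOAL is to generate many random precedence-feasible assignments and keep the best one. A minimal sketch of that idea (the task times, precedence relations and cycle time below are hypothetical, not taken from the motorcycle case study):

```python
import random

def comsoal(tasks, precedence, cycle_time, iterations=200, seed=1):
    """COMSOAL-style random generation of feasible workstation assignments,
    keeping the solution with the fewest workstations.
    Assumes every task time is at most the cycle time."""
    rng = random.Random(seed)
    best = None
    for _ in range(iterations):
        unassigned = set(tasks)
        stations, current, remaining = [], [], cycle_time
        while unassigned:
            assigned = set(tasks) - unassigned
            # Candidate tasks: predecessors done and fits the open station.
            fit = [t for t in unassigned
                   if precedence.get(t, set()) <= assigned and tasks[t] <= remaining]
            if not fit:  # nothing fits: close this station, open a new one
                stations.append(current)
                current, remaining = [], cycle_time
                continue
            t = rng.choice(fit)  # the random choice that defines COMSOAL
            current.append(t)
            remaining -= tasks[t]
            unassigned.remove(t)
        stations.append(current)
        if best is None or len(stations) < len(best):
            best = stations
    return best

# Hypothetical task times (seconds) and precedence relations.
tasks = {"A": 4, "B": 3, "C": 5, "D": 2, "E": 4}
prec = {"B": {"A"}, "C": {"A"}, "D": {"B", "C"}, "E": {"D"}}
print(comsoal(tasks, prec, cycle_time=9))
```

With 18 s of total work and a 9 s cycle time, two stations is the theoretical minimum here, and the random search finds an assignment that achieves it.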
Abstract:
The technique of femoral cement-in-cement revision is well established, but there are no previous series reporting its use on the acetabular side at the time of revision total hip arthroplasty. We describe the surgical technique and report the outcome of 60 consecutive cement-in-cement revisions of the acetabular component at a mean follow-up of 8.5 years (range 5-12 years). All had a radiologically and clinically well fixed acetabular cement mantle at the time of revision. Twenty-nine patients died; no case was lost to follow-up. The two most common indications for acetabular revision were recurrent dislocation (77%) and to complement a femoral revision (20%). There were 2 cases of aseptic cup loosening (3.3%) requiring re-revision. No other hip was clinically or radiologically loose (96.7%) at latest follow-up. One case was re-revised for infection, 4 for recurrent dislocation and 1 for disarticulation of a constrained component. At 5 years, the Kaplan-Meier survival rate was 100% for aseptic loosening and 92.2% (95% CI; 84.8-99.6%) with revision for all causes as the endpoint. These results support the use of the cement-in-cement revision technique in appropriate cases on the acetabular side. Theoretical advantages include preservation of bone stock, reduced operating time, reduced risk of complications and durable fixation.
Abstract:
The rank transform is a non-parametric technique which has been recently proposed for the stereo matching problem. The motivation behind its application to the matching problem is its invariance to certain types of image distortion and noise, as well as its amenability to real-time implementation. This paper derives an analytic expression for the process of matching using the rank transform, and then goes on to derive one constraint which must be satisfied for a correct match. This has been dubbed the rank order constraint or simply the rank constraint. Experimental work has shown that this constraint is capable of resolving ambiguous matches, thereby improving matching reliability. This constraint was incorporated into a new algorithm for matching using the rank transform. This modified algorithm resulted in an increased proportion of correct matches, for all test imagery used.
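The rank transform itself is simple to state: each pixel is replaced by the count of pixels in its surrounding window whose intensity is less than the centre pixel. A minimal sketch on a toy image:

```python
def rank_transform(image, radius=1):
    """Rank transform: each pixel becomes the number of neighbours in the
    surrounding (2*radius+1)^2 window whose intensity is below the centre's.
    Border pixels (with incomplete windows) are left at zero."""
    h, w = len(image), len(image[0])
    out = [[0] * w for _ in range(h)]
    for y in range(radius, h - radius):
        for x in range(radius, w - radius):
            centre = image[y][x]
            out[y][x] = sum(
                image[y + dy][x + dx] < centre
                for dy in range(-radius, radius + 1)
                for dx in range(-radius, radius + 1)
                if (dy, dx) != (0, 0))
    return out

img = [
    [10, 20, 30],
    [40, 50, 60],
    [70, 80, 90],
]
# The centre pixel 50 exceeds 4 of its 8 neighbours.
print(rank_transform(img)[1][1])  # → 4
```

Because only intensity orderings matter, any monotonic change in brightness or gain leaves the transform unchanged, which is the invariance to image distortion that motivates its use in matching.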
Abstract:
A fundamental problem faced by stereo matching algorithms is the matching or correspondence problem, for which a wide range of algorithms have been proposed. For any matching algorithm, it would be useful to be able to compute a measure of the probability of correctness, or reliability, of a match. This paper focuses on one class of matching algorithms, those based on the rank transform. The interest in these algorithms for stereo matching stems from their invariance to radiometric distortion and their amenability to fast hardware implementation. This work differs from previous work in that it derives, from first principles, an expression for the probability of a correct match, based on an enumeration of all possible symbols for matching. The theoretical results for disparity error prediction obtained using this method were found to agree well with experimental results. However, the disadvantages of the technique developed here are that it is not easily applicable to real images and that it is too computationally expensive for practical window sizes. Nevertheless, the exercise provides an interesting and novel analysis of match reliability.
Abstract:
An iterative strategy is proposed for finding the optimal rating and location of fixed and switched capacitors in distribution networks. The substation Load Tap Changer tap is also set during this procedure. A Modified Discrete Particle Swarm Optimization is employed in the proposed strategy. The objective function comprises the distribution line loss cost and the capacitor investment cost. The line loss is calculated by approximating the load duration curve with multiple load levels. The constraints are the bus voltages and feeder currents, which must be maintained within their standard ranges. Two case studies are tested to validate the proposed method. The first is the semi-urban 37-bus distribution system connected at bus 2 of the Roy Billinton Test System, located on the secondary side of a 33/11 kV distribution substation. The second is a 33 kV distribution network based on a modification of the 18-bus IEEE distribution system. The results are compared with prior publications to illustrate the accuracy of the proposed strategy.
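A minimal discrete PSO sketch in the spirit of this approach (not the paper's Modified Discrete PSO): each particle's position is a vector of integer levels, such as an index into a list of standard capacitor ratings per candidate bus, and real-valued velocities are rounded back onto the grid. The toy objective stands in for the loss-plus-investment cost; all parameters are hypothetical:

```python
import random

def discrete_pso(cost, levels, dim, n_particles=20, iters=80,
                 w=0.7, c1=1.5, c2=1.5, seed=3):
    """Minimal discrete PSO: minimise cost() over integer vectors in
    {0, ..., levels-1}^dim by rounding continuous velocity updates."""
    rng = random.Random(seed)
    pos = [[rng.randrange(levels) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_cost = [cost(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_cost[i])
    gbest, gbest_cost = pbest[g][:], pbest_cost[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                # Inertia plus attraction to personal and global bests.
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = min(levels - 1, max(0, round(pos[i][d] + vel[i][d])))
            c = cost(pos[i])
            if c < pbest_cost[i]:
                pbest[i], pbest_cost[i] = pos[i][:], c
                if c < gbest_cost:
                    gbest, gbest_cost = pos[i][:], c
    return gbest, gbest_cost

# Toy stand-in for the real objective: hypothetical optimum is rating
# level 2 at each of 4 candidate buses.
target = [2, 2, 2, 2]
cost = lambda p: sum((a - b) ** 2 for a, b in zip(p, target))
print(discrete_pso(cost, levels=5, dim=4))
```

In the real problem the cost evaluation would run a power flow per load level and penalise voltage or current constraint violations; the search mechanics are unchanged.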
Abstract:
Attentional Control Theory (ACT) proposes that high-anxious individuals maintain performance effectiveness (accuracy) at the expense of processing efficiency (response time), in particular for the two central executive functions of inhibition and shifting. In contrast, research has generally failed to consider the third executive function, updating. In the current study, seventy-five participants completed the Parametric Go/No-Go and n-back tasks, as well as the State-Trait Anxiety Inventory, in order to explore the effects of anxiety on attention. Results indicated that anxiety led to a decline in processing efficiency, but not in performance effectiveness, across all three central executive functions (inhibition, set-shifting and updating). Interestingly, participants with high levels of trait anxiety also exhibited impaired performance effectiveness on the n-back task designed to measure the updating function. Findings are discussed in relation to developing a new model of ACT that also includes the role of preattentive processes and dual-task coordination when exploring the effects of anxiety on task performance.
Abstract:
This paper presents a novel technique for segmenting an audio stream into homogeneous regions according to speaker identities, background noise, music, environmental and channel conditions. Audio segmentation is useful in audio diarization systems, which aim to annotate an input audio stream with information that attributes temporal regions of the audio to their specific sources. The segmentation method introduced in this paper computes the Generalized Likelihood Ratio (GLR) between two adjacent sliding windows over preprocessed speech. This approach is inspired by the popular segmentation method from the pioneering work of Chen and Gopalakrishnan, which uses the Bayesian Information Criterion (BIC) with an expanding search window; this paper aims to identify and address the shortcomings of that approach. The proposed segmentation strategy is evaluated on the 2002 Rich Transcription (RT-02) Evaluation dataset; a miss rate of 19.47% and a false alarm rate of 16.94% are achieved at the optimal threshold.
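The GLR between two adjacent windows can be illustrated with single-Gaussian models over one-dimensional features (real diarization systems model multivariate acoustic features such as MFCCs; the data below are synthetic):

```python
import random
from math import log

def glr_distance(x, y):
    """Gaussian GLR distance between two windows: the log ratio of the
    likelihood under separate models for x and y versus one merged model.
    High values suggest different sources, i.e. a change point."""
    def var(w):  # maximum-likelihood variance with the window's own mean
        m = sum(w) / len(w)
        return max(sum((v - m) ** 2 for v in w) / len(w), 1e-12)
    z = x + y
    return (len(z) * log(var(z))
            - len(x) * log(var(x)) - len(y) * log(var(y))) / 2

rng = random.Random(0)
a = [rng.gauss(0.0, 1.0) for _ in range(200)]   # synthetic "source 1"
b = [rng.gauss(3.0, 1.0) for _ in range(200)]   # synthetic "source 2"
# Within one source the distance stays near zero; across the boundary
# between sources it peaks, which is what the sliding windows detect.
same = glr_distance(a[:100], a[100:])
diff = glr_distance(a[100:], b[:100])
print(same, diff)
```

Sliding the window pair across the stream and thresholding the resulting distance curve yields the candidate segment boundaries.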
Abstract:
This paper proposes the use of Bayesian approaches with the cross likelihood ratio (CLR) as a criterion for speaker clustering within a speaker diarization system, using eigenvoice modeling techniques. The CLR has previously been shown to be an effective decision criterion for speaker clustering using Gaussian mixture models. Recently, eigenvoice modeling has become an increasingly popular technique, due to its ability to adequately represent a speaker based on sparse training data, as well as to provide an improved capture of differences in speaker characteristics. Integrating eigenvoice modeling into the CLR framework, to capitalize on the advantages of both techniques, has also been shown to benefit the speaker clustering task. Building on that success, this paper proposes the use of Bayesian methods to compute the conditional probabilities that enter the CLR, thus effectively combining the eigenvoice-CLR framework with the advantages of a Bayesian approach to the diarization problem. Results obtained on the 2002 Rich Transcription (RT-02) Evaluation dataset show improved clustering performance, with a 33.5% relative improvement in the overall Diarization Error Rate (DER) compared to the baseline system.
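One common form of the CLR scores a pair of segments by how much better each is explained by the other's speaker model than by a background (UBM) model trained on all speech. A sketch with single-Gaussian "speaker models" over synthetic one-dimensional features (real systems use GMMs or eigenvoice-adapted models over acoustic feature vectors):

```python
import random
from math import log, pi

def gauss_loglik(data, mean, var):
    """Log-likelihood of data under a single Gaussian."""
    return sum(-0.5 * (log(2 * pi * var) + (v - mean) ** 2 / var) for v in data)

def fit(data):
    """Maximum-likelihood (mean, variance) for a single Gaussian."""
    m = sum(data) / len(data)
    return m, max(sum((v - m) ** 2 for v in data) / len(data), 1e-6)

def clr(x, y, ubm):
    """Cross likelihood ratio: per-frame gain of each segment under the
    other segment's model relative to the UBM. High CLR -> same speaker."""
    mx, my = fit(x), fit(y)
    return ((gauss_loglik(x, *my) - gauss_loglik(x, *ubm)) / len(x)
            + (gauss_loglik(y, *mx) - gauss_loglik(y, *ubm)) / len(y))

rng = random.Random(1)
spk1a = [rng.gauss(0, 1) for _ in range(300)]  # two segments of "speaker 1"
spk1b = [rng.gauss(0, 1) for _ in range(300)]
spk2 = [rng.gauss(4, 1) for _ in range(300)]   # a segment of "speaker 2"
ubm = fit(spk1a + spk1b + spk2)  # background model over all speech
print(clr(spk1a, spk1b, ubm), clr(spk1a, spk2, ubm))
```

Agglomerative clustering then repeatedly merges the segment pair with the highest CLR until no pair exceeds a stopping threshold.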