Abstract:
Objectives: This qualitative study canvassed residents' perceptions of the needs and barriers to the expression of sexuality in long-term care. Methods: Sixteen residents, including five with dementia, from six aged care facilities in two Australian states were interviewed. Data were analysed using a constant comparative method. Results: Four categories describe residents' views about sexuality, their needs and the barriers to its expression: ‘It still matters’, ‘Reminiscence and resignation’, ‘It's personal’, and ‘It's an unconducive environment’. Discussion: Residents, including those with dementia, saw themselves as sexual beings with a continuing need and desire to express their sexuality. The manner in which it was expressed varied. Many barriers to sexual expression were noted, including negative attitudes of staff, lack of privacy, and limited opportunities for the establishment of new relationships or the continuation of old ones. Interviewees agreed that how a resident expressed their sexuality was their business and no one else's.
Abstract:
A Delay Tolerant Network (DTN) is one where nodes can be highly mobile, with long message delay times forming dynamic and fragmented networks. Traditional centralised network security is difficult to implement in such a network, so distributed security solutions are more desirable in DTN implementations. Establishing effective trust in distributed systems without a centralised Public Key Infrastructure (PKI), such as in the Pretty Good Privacy (PGP) scheme, usually requires human intervention. Our aim is to build and compare different decentralised trust systems for implementation in autonomous DTN systems. In this paper, we utilise a key distribution model based on the Web of Trust principle and employ a simple leverage-of-common-friends trust system to establish initial trust in autonomous DTNs. We compare this system with two other methods of autonomously establishing initial trust by introducing a malicious node and measuring the distribution of malicious and fake keys. Our results show that the new trust system not only mitigates the distribution of fake and malicious keys by 40% by the end of the simulation, but also improves key distribution between nodes. This paper contributes a comparison of three decentralised trust systems that can be employed in autonomous DTN systems.
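As a rough illustration of the leverage-of-common-friends idea, the sketch below accepts a newly offered key only when enough already-trusted nodes vouch for its owner. The threshold, the vouching table and the function names are assumptions for illustration; the abstract does not specify the actual acceptance rule.

```python
# Hedged sketch: accept a new key in a DTN only if it is vouched for by at
# least `min_friends` nodes that the local node already trusts. The rule,
# names and threshold are hypothetical, not the paper's exact scheme.

def accept_key(trusted_keys: set, vouchers: dict, candidate_owner: str,
               min_friends: int = 2) -> bool:
    """Return True if enough mutually trusted nodes vouch for the candidate."""
    common_friends = vouchers.get(candidate_owner, set()) & trusted_keys
    return len(common_friends) >= min_friends

# Example: node A already trusts B and C; both vouch for D, so D's key is accepted,
# while M, vouched for only by an unknown node X, is rejected.
trusted = {"B", "C"}
vouch_table = {"D": {"B", "C"}, "M": {"X"}}
print(accept_key(trusted, vouch_table, "D"))  # True
print(accept_key(trusted, vouch_table, "M"))  # False
```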
Abstract:
The power system stabilizer (PSS) is one of the most important controllers in modern power systems for damping low-frequency oscillations. Many efforts have been dedicated to designing tuning methodologies and allocation techniques to obtain optimal damping behavior of the system. Traditionally, a PSS is tuned mostly for local damping performance; however, to obtain globally optimal performance, its tuning needs to consider more variables. Furthermore, with the growth of system interconnection and the increase of system complexity, new tools are required to achieve global tuning and coordination of PSSs towards a globally optimal solution. Differential evolution (DE) is recognized as a simple and powerful global optimization technique, offering fast convergence as well as high computational efficiency. However, as with many other evolutionary algorithms (EAs), premature convergence of the population restricts the optimization capacity of DE. In this paper, a modified DE is proposed and applied to optimal PSS tuning of the 39-bus New England system. New operators are introduced to reduce the probability of premature convergence. To investigate the impact of system conditions on PSS tuning, multiple operating points are studied. Simulation results are compared with those of standard DE and particle swarm optimization (PSO).
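For context, the sketch below implements the standard DE/rand/1/bin loop that such a modified DE builds on. The paper's anti-premature operators and its actual damping objective are not reproduced; the toy objective, bounds and parameter values are placeholders.

```python
# Hedged sketch of plain DE/rand/1/bin; the paper's modified operators and
# the real PSS damping-ratio objective are not shown here.
import numpy as np

def de_minimise(objective, bounds, pop_size=20, F=0.5, CR=0.9, generations=100, seed=None):
    rng = np.random.default_rng(seed)
    dim = len(bounds)
    lo, hi = np.array(bounds, dtype=float).T
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    fitness = np.array([objective(x) for x in pop])
    for _ in range(generations):
        for i in range(pop_size):
            idx = rng.choice([j for j in range(pop_size) if j != i], size=3, replace=False)
            a, b, c = pop[idx]
            mutant = np.clip(a + F * (b - c), lo, hi)        # DE mutation
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True                  # guarantee at least one crossed gene
            trial = np.where(cross, mutant, pop[i])          # binomial crossover
            f_trial = objective(trial)
            if f_trial <= fitness[i]:                        # greedy selection
                pop[i], fitness[i] = trial, f_trial
    best = fitness.argmin()
    return pop[best], fitness[best]

# Toy stand-in objective; real PSS tuning would evaluate closed-loop damping
# ratios of the 39-bus system at several operating points.
best_gains, best_cost = de_minimise(lambda x: float(np.sum(x ** 2)), bounds=[(-5, 5)] * 4, seed=0)
print(best_gains, best_cost)
```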
Abstract:
Purpose: To determine whether neuroretinal function differs in healthy persons with and without common risk gene variants for age-related macular degeneration (AMD) and no ophthalmoscopic signs of AMD, and to compare these findings with those of persons with manifest early AMD. Methods and Participants: Neuroretinal function was assessed with the multifocal electroretinogram (mfERG) (VERIS, Redwood City, CA) in 32 participants (22 healthy persons with no clinical signs of AMD and 10 early AMD patients). The 22 healthy participants with no AMD were classified by the presence or absence of risk genotypes for CFH (rs380390) and/or ARMS2 (rs10490920). We used a slow flash mfERG paradigm (3 inserted frames) and a 103 hexagon stimulus array. Recordings were made with DTL electrodes; fixation and eye movements were monitored online. Trough N1 to peak P1 (N1P1) response densities and P1 implicit times (IT) were analysed in 5 concentric rings. Results: N1P1 response densities (mean ± SD) for concentric rings 1-3 were on average significantly higher in at-risk genotypes (ring 1: 17.97 nV/deg2 ± 1.9, ring 2: 11.7 nV/deg2 ± 1.3, ring 3: 8.7 nV/deg2 ± 0.7) compared to those without risk (ring 1: 13.7 nV/deg2 ± 1.9, ring 2: 9.2 nV/deg2 ± 0.8, ring 3: 7.3 nV/deg2 ± 1.1) and compared to persons with early AMD (ring 1: 15.3 nV/deg2 ± 4.8, ring 2: 9.1 nV/deg2 ± 2.3, ring 3: 7.3 nV/deg2 ± 1.3) (p<0.05). The group P1 implicit times for ring 1 were on average delayed in the early AMD patients (36.4 ms ± 1.0) compared to healthy participants with (35.1 ms ± 1.1) or without risk genotypes (34.8 ms ± 1.3), although these differences were not significant. Conclusion: Neuroretinal function in persons with normal fundi can be differentiated into subgroups based on their genetics. Increased neuroretinal activity in persons who carry AMD risk genotypes may be due to genetically determined subclinical inflammatory and/or histological changes in the retina. Assessment of neuroretinal function in healthy persons genetically susceptible to AMD may be a useful early biomarker before there is clinical manifestation of AMD.
Abstract:
This paper proposes a self-tuning feedforward active noise control (ANC) system with online secondary path modeling. The step-size parameters of the controller and modeling filters have a crucial role in system performance. In the literature, these parameters are adjusted by trial and error; in other words, they are manually initialized before the system starts, which requires extensive experiments to ensure the convergence of the system. Hence there is no guarantee that the system will perform well under different conditions. In the proposed method, appropriate values for the step-sizes are obtained automatically. Computer simulation results indicate the effectiveness of the proposed method.
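As a point of reference, the following sketch shows a conventional FxLMS controller with a fixed, manually chosen step-size mu, i.e. the situation the self-tuning scheme is meant to avoid. The secondary-path estimate, signals and parameter values are illustrative assumptions, not the paper's setup.

```python
# Hedged sketch of fixed-step FxLMS; the paper's automatic step-size rule and
# online secondary-path modelling are not reproduced here.
import numpy as np

def fxlms(x, d, s_hat, L=32, mu=1e-3):
    """x: reference signal, d: primary disturbance at the error mic,
    s_hat: FIR estimate of the secondary path (assumed known here)."""
    M = len(s_hat)
    w = np.zeros(L)                 # adaptive control filter
    x_buf = np.zeros(L)             # recent reference samples for the controller
    y_buf = np.zeros(M)             # recent anti-noise samples through the secondary path
    xr_buf = np.zeros(M)            # recent reference samples to filter through s_hat
    xf_buf = np.zeros(L)            # filtered-reference regressor for the LMS update
    e_hist = np.zeros(len(x))
    for n in range(len(x)):
        x_buf = np.r_[x[n], x_buf[:-1]]
        y = w @ x_buf                              # controller output (anti-noise)
        y_buf = np.r_[y, y_buf[:-1]]
        e = d[n] + s_hat @ y_buf                   # residual at the error microphone
        xr_buf = np.r_[x[n], xr_buf[:-1]]
        xf_buf = np.r_[s_hat @ xr_buf, xf_buf[:-1]]
        w -= mu * e * xf_buf                       # FxLMS weight update (fixed step-size mu)
        e_hist[n] = e
    return e_hist

# Toy tonal-noise run: residual power should drop once the filter adapts.
rng = np.random.default_rng(0)
t = np.arange(5000)
x = np.sin(2 * np.pi * 0.05 * t) + 0.05 * rng.standard_normal(t.size)
d = -0.8 * np.sin(2 * np.pi * 0.05 * t + 0.3)
e = fxlms(x, d, s_hat=np.array([1.0, 0.3]))
print("residual power, first vs last 500 samples:", np.mean(e[:500] ** 2), np.mean(e[-500:] ** 2))
```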
Abstract:
Most existing research on maintenance optimisation for multi-component systems considers only the lifetime distribution of the components. When the condition-based maintenance (CBM) strategy is adopted for multi-component systems, the strategy structure becomes complex due to the large number of component states and their combinations. Consequently, some predetermined maintenance strategy structures are often assumed before the maintenance optimisation of a multi-component system in a CBM context. Developing these predetermined strategy structures requires expert experience, and the optimality of these strategies is often not proven. This paper proposes a maintenance optimisation method that does not require any predetermined strategy structure for a two-component series system. The proposed method is developed based on the semi-Markov decision process (SMDP). A simulation study shows that the proposed method can identify the optimal maintenance strategy adaptively for different maintenance costs and degradation process parameters. The optimal maintenance strategy structure is also investigated in the simulation study, which provides a reference for further research on maintenance optimisation of multi-component systems.
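To make the decision-process framing concrete, the sketch below solves a heavily simplified discrete-time MDP version of the two-component problem by value iteration; the paper itself uses an SMDP, which additionally models sojourn times. Every cost, probability and state discretisation here is a hypothetical illustration.

```python
# Hedged sketch: a 3-level-per-component MDP stand-in for the paper's SMDP.
# All numbers are invented illustration values.
import itertools
import numpy as np

LEVELS = 3                                     # degradation levels: 0 = new, 2 = failed
STATES = list(itertools.product(range(LEVELS), repeat=2))
ACTIONS = ["none", "replace_1", "replace_2", "replace_both"]
C_REPLACE, C_SETUP, C_DOWNTIME = 50.0, 20.0, 200.0
P_DEGRADE = 0.3                                # per-period chance a component degrades one level
GAMMA = 0.95

def transition(state, action):
    """Return ({next_state: prob}, immediate cost) for one decision period."""
    s = list(state)
    replaced = {"none": [], "replace_1": [0], "replace_2": [1], "replace_both": [0, 1]}[action]
    cost = (C_SETUP + C_REPLACE * len(replaced)) if replaced else 0.0
    for i in replaced:
        s[i] = 0                               # replacement renews the component
    if 2 in s:
        cost += C_DOWNTIME                     # series system: one failure stops production
    dist = {}
    for deg in itertools.product([0, 1], repeat=2):
        ns = tuple(min(s[i] + deg[i], 2) for i in range(2))
        p = float(np.prod([P_DEGRADE if deg[i] else 1 - P_DEGRADE for i in range(2)]))
        dist[ns] = dist.get(ns, 0.0) + p
    return dist, cost

def q_value(V, s, a):
    dist, cost = transition(s, a)
    return cost + GAMMA * sum(p * V[ns] for ns, p in dist.items())

V = {s: 0.0 for s in STATES}
for _ in range(500):                           # value iteration on expected discounted cost
    V = {s: min(q_value(V, s, a) for a in ACTIONS) for s in STATES}
policy = {s: min(ACTIONS, key=lambda a: q_value(V, s, a)) for s in STATES}
print(policy)                                  # maintenance action chosen in each joint state
```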
Abstract:
The main theme of this thesis is to allow the users of cloud services to outsource their data without the need to trust the cloud provider. The method is based on combining existing proof-of-storage schemes with distance-bounding protocols. Specifically, cloud customers will be able to verify the confidentiality, integrity, availability, fairness (or mutual non-repudiation), data freshness, geographic assurance and replication of their stored data directly, without having to rely on the word of the cloud provider.
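As a loose illustration of the distance-bounding ingredient used for geographic assurance, the sketch below times a nonce challenge-response and rejects round trips that are too slow for the claimed distance. The propagation speed, processing allowance and echo-style response are placeholder assumptions, not the thesis's protocol.

```python
# Hedged sketch of a round-trip-time check for geographic assurance; a real
# scheme would bind the response to the stored data, not just echo the nonce.
import os
import time

C_FIBRE_KM_PER_S = 2.0e5          # rough signal propagation speed in optical fibre

def consistent_with_claimed_distance(respond, claimed_distance_km, processing_budget_s=0.005):
    """Time one challenge-response round trip and check it is achievable
    from within the claimed distance (plus a small processing allowance)."""
    nonce = os.urandom(16)
    start = time.perf_counter()
    answer = respond(nonce)        # the prover answers with a value derived from the nonce
    rtt = time.perf_counter() - start
    if answer != nonce:            # placeholder freshness check
        return False
    max_rtt = 2 * claimed_distance_km / C_FIBRE_KM_PER_S + processing_budget_s
    return rtt <= max_rtt

# Example: an in-process responder is easily consistent with a 1 km claim.
print(consistent_with_claimed_distance(lambda n: n, claimed_distance_km=1.0))
```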
Abstract:
OBJECTIVE: To investigate the role of the dopamine receptor genes, DRD1, DRD3, and DRD5 in the pathogenesis of migraine. BACKGROUND: Migraine is a chronic debilitating disorder affecting approximately 12% of the white population. The disease shows strong familial aggregation and presumably has a genetic basis, but at present the type and number of genes involved are unclear. The study of candidate genes can prove useful in the identification of genes involved in complex diseases such as migraine, especially if the contribution of the gene to phenotypic expression is minor. Genes coding for proteins involved in dopamine metabolism have been implicated in a number of neurologic conditions and may play a contributory role in migraine. Hence, genes that code for enzymes and receptors modulating dopaminergic activity are good candidates for investigation of the molecular genetic basis of migraine. METHODS: We tested 275 migraineurs and 275 age- and sex-matched individuals free of migraine. Genotypes were determined by restriction endonuclease digestion of polymerase chain reaction products to detect DRD1 and DRD3 alleles and by Genescan analysis after polymerase chain reaction using fluorescently labelled oligonucleotide primers for the DRD5 marker. RESULTS: Results of chi-square statistical analyses indicated that the allele distribution for migraine cases compared to controls was not significantly different for any of the three tested gene markers (chi2 = 0.1, P = .74 for DRD1; chi2 = 1.8, P = .18 for DRD3; and chi2 = 20.3, P = .08 for DRD5). CONCLUSIONS: These findings offer no evidence for allelic association between the tested dopamine receptor gene polymorphisms and the more prevalent forms of migraine and, therefore, do not support a role for these genes in the pathogenesis of the disorder.
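The allele-association analysis described here is a standard chi-square test on a case-control contingency table of allele counts; the sketch below shows the calculation on invented counts, not the study's data.

```python
# Hedged sketch of a case-control chi-square test of allele distribution.
# The counts are hypothetical and are NOT the study's data.
from scipy.stats import chi2_contingency

# rows: migraine cases, controls; columns: counts of the two alleles at a marker
allele_counts = [[310, 240],   # hypothetical case allele counts (2 alleles per person)
                 [300, 250]]   # hypothetical control allele counts

chi2, p, dof, expected = chi2_contingency(allele_counts, correction=False)
print(f"chi2 = {chi2:.2f}, dof = {dof}, P = {p:.2f}")
```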
Abstract:
Reliability of the performance of biometric identity verification systems remains a significant challenge. Individual biometric samples of the same person (identity class) are not identical at each presentation, and performance degradation arises from intra-class variability and inter-class similarity. These limitations lead to false accepts and false rejects that are interdependent, so it is difficult to reduce the rate of one type of error without increasing the other. The focus of this dissertation is to investigate a method based on classifier fusion techniques to better control the trade-off between the verification errors, using text-dependent speaker verification as the test platform. A sequential classifier fusion architecture that integrates multi-instance and multi-sample fusion schemes is proposed. This fusion method enables a controlled trade-off between false alarms and false rejects. For statistically independent classifier decisions, analytical expressions for each type of verification error are derived from the base classifier performances. As this assumption may not always hold, these expressions are modified to incorporate the correlation between statistically dependent decisions from clients and impostors. The architecture is evaluated empirically for text-dependent speaker verification, using Hidden Markov Model based, digit-dependent speaker models in each stage with multiple attempts for each digit utterance. The trade-off between the verification errors is controlled using two parameters, the number of decision stages (instances) and the number of attempts at each decision stage (samples), fine-tuned on an evaluation/tuning set. The statistical validity of the derived error estimates is evaluated on test data. The performance of the sequential method is further shown to depend on the order in which the digits (instances) are combined and on the nature of the repeated attempts (samples). The false rejection and false acceptance rates for the proposed fusion are estimated from the base classifier performances, the variance in correlation between classifier decisions, and the sequence of classifiers with favourable dependence selected using the 'Sequential Error Ratio' criterion. The error rates are better estimated by incorporating user-dependent information (such as speaker-dependent thresholds and speaker-specific digit combinations) and class-dependent information (such as client-impostor dependent favourable combinations and class-error based threshold estimation). The proposed architecture is desirable in most speaker verification applications, such as remote authentication and telephone and internet shopping. The tuning of the parameters, the number of instances and samples, serves both the security and user convenience requirements of speaker-specific verification. The architecture investigated here is also applicable to verification using other biometric modalities such as handwriting, fingerprints and keystrokes.
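Under the independence assumption, error expressions of the following form arise for one plausible decision rule (a stage accepts if any of its attempts is accepted, and the system accepts only if every stage accepts). This is an illustrative reading, not necessarily the exact rule or the expressions derived in the dissertation.

```python
# Hedged sketch of system-level error rates under statistically independent
# per-attempt decisions, for an "any attempt per stage, all stages overall" rule.
from math import prod

def system_error_rates(stage_far, stage_frr, attempts_per_stage):
    """stage_far / stage_frr: per-attempt false-accept and false-reject rates
    of the base classifier at each stage (digit)."""
    n = attempts_per_stage
    far_per_stage = [1 - (1 - a) ** n for a in stage_far]   # impostor accepted on any attempt
    frr_per_stage = [r ** n for r in stage_frr]             # client rejected on every attempt
    system_far = prod(far_per_stage)                        # impostor must pass all stages
    system_frr = 1 - prod(1 - f for f in frr_per_stage)     # client fails at some stage
    return system_far, system_frr

# Three digit stages, two attempts per stage: adding stages pushes the false
# accept rate down, adding attempts pushes the false reject rate down, which
# is the controllable trade-off described above.
print(system_error_rates([0.05, 0.04, 0.06], [0.08, 0.07, 0.09], attempts_per_stage=2))
```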
Abstract:
Discipline boundaries in science and technology education are inevitable. Often, such barriers are an obstacle to industry-based learning, leading to preventable complexities. Industry-based learning is a more complex scenario than conventional learning, which leads to the study of liquid learning, a timely concept for investigating learning without boundaries. Liquid learning consists of accountability and expectations and is driven by outcomes with different learning choices. Liquid learning is a significant phenomenon requiring awareness in science and technology education. This paper aims to discuss some practical issues in designing industry-based learning without boundaries. A case study approach is reviewed and presented.
Abstract:
This paper explores the rationale, experience and impact of thirteen Australian and New Zealand universities that have integrated the Engineers Without Borders (EWB) Challenge into their first-year engineering curriculum. EWB is a national competition for university students, who work in teams to develop conceptual designs for real sustainable development projects across the globe. This project investigated “what works and what doesn’t” in engineering curriculum renewal, utilising content analysis, multiple in-depth interviews with students and staff (coordinators, lecturers, tutors) and observation. EWB comprises between 25% and 100% of the total assessment items. This paper specifically focuses on students' experience of EWB, documenting how the project teaches sustainability and systems-thinking approaches, engages students with different cultures, and fosters teamwork, new ways of thinking and communication skills. We identify key benefits and challenges of EWB, as well as mechanisms and contexts that foster student engagement and learning outcomes.
Abstract:
Background: This open-label, randomised phase III study was designed to further investigate the clinical activity and safety of SRL172 (killed Mycobacterium vaccae suspension) with chemotherapy in the treatment of non-small-cell lung cancer (NSCLC). Patients and methods: Patients were randomised to receive platinum-based chemotherapy, consisting of up to six cycles of MVP (mitomycin, vinblastine and cisplatin or carboplatin) with (210 patients) or without (209 patients) monthly SRL172. Results: There was no statistical difference between the two groups in overall survival (primary efficacy end point) over the course of the study (median overall survival of 223 days versus 225 days; P = 0.65). However, a higher proportion of patients were alive at the end of the 15-week treatment phase in the chemotherapy plus SRL172 group (90%), than in the chemotherapy alone group (83%) (P = 0.061). At the end of the treatment phase, the response rate was 37% in the combined group and 33% in the chemotherapy alone group. Patients in the chemotherapy alone group had greater deterioration in their Global Health Status score (-14.3) than patients in the chemotherapy plus SRL172 group (-6.6) (P = 0.02). Conclusion: In this non-placebo controlled trial, SRL172 when added to standard cancer chemotherapy significantly improved patient quality of life without affecting overall survival times. © 2004 European Society for Medical Oncology.
Abstract:
Whole-image descriptors such as GIST have been used successfully for persistent place recognition when combined with temporal or sequential filtering techniques. However, whole-image descriptor localization systems often apply a heuristic rather than a probabilistic approach to place recognition, requiring substantial environment-specific tuning prior to deployment. In this paper we present a novel online solution that uses statistical approaches to calculate place recognition likelihoods for whole-image descriptors, without requiring either environmental tuning or pre-training. Using a real-world benchmark dataset, we show that this method creates distributions appropriate to a specific environment in an online manner. Our method performs comparably to FAB-MAP in raw place recognition performance, and integrates into a state-of-the-art probabilistic mapping system to provide superior performance to whole-image methods that are not based on true probability distributions. The method provides a principled means for combining the powerful change-invariant properties of whole-image descriptors with probabilistic back-end mapping systems, without the need for prior training or system tuning.
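As a minimal illustration of converting whole-image descriptor differences into likelihoods without hand tuning, the sketch below fits a simple Gaussian to the current set of difference scores and scores candidates by tail probability. The paper's actual statistical model is not specified in the abstract, so this is an assumed stand-in.

```python
# Hedged sketch: online pseudo-likelihoods from whole-image descriptor
# differences, using a Gaussian fitted to the current difference scores.
import numpy as np
from math import erf, sqrt

def match_likelihoods(current_desc, stored_descs):
    """Turn descriptor differences into normalised pseudo-likelihoods."""
    diffs = np.array([np.linalg.norm(current_desc - d, ord=1) for d in stored_descs])
    mu, sigma = diffs.mean(), diffs.std() + 1e-9
    # Upper-tail probability that a random non-matching difference would exceed
    # the observed one; unusually small differences score close to 1.
    scores = np.array([0.5 * (1 - erf((v - mu) / (sigma * sqrt(2)))) for v in diffs])
    return scores / scores.sum()

# Toy example: four stored GIST-like descriptors, the first almost identical to the query.
rng = np.random.default_rng(1)
stored = [rng.random(512) for _ in range(4)]
query = stored[0] + 0.01 * rng.standard_normal(512)
print(match_likelihoods(query, stored))
```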