937 results for area-based matching
Abstract:
This paper presents a new approach to speech enhancement from single-channel measurements involving both noise and channel distortion (i.e., convolutional noise), and demonstrates its applications for robust speech recognition and for improving noisy speech quality. The approach is based on finding longest matching segments (LMS) from a corpus of clean, wideband speech. The approach adds three novel developments to our previous LMS research. First, we address the problem of channel distortion as well as additive noise. Second, we present an improved method for modeling noise for speech estimation. Third, we present an iterative algorithm which updates the noise and channel estimates of the corpus data model. In speech recognition experiments on the Aurora 4 database, the use of our enhancement approach as a preprocessor for feature extraction significantly improved the performance of a baseline recognition system. In a further comparison against conventional enhancement algorithms, both the PESQ and the segmental SNR ratings of the LMS algorithm were superior to those of the other methods for noisy speech enhancement.
Abstract:
This paper presents a new approach to single-channel speech enhancement involving both noise and channel distortion (i.e., convolutional noise). The approach is based on finding longest matching segments (LMS) from a corpus of clean, wideband speech. The approach adds three novel developments to our previous LMS research. First, we address the problem of channel distortion as well as additive noise. Second, we present an improved method for modeling noise. Third, we present an iterative algorithm for improved speech estimates. In speech recognition experiments on the Aurora 4 database, the use of our enhancement approach as a preprocessor for feature extraction significantly improved the performance of a baseline recognition system. In a further comparison against conventional enhancement algorithms, both the PESQ and the segmental SNR ratings of the LMS algorithm were superior to those of the other methods for noisy speech enhancement.
Index Terms: corpus-based speech model, longest matching segment, speech enhancement, speech recognition
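For context, the segmental SNR rating cited in both abstracts is conventionally computed as a frame-averaged, clipped log energy ratio between the clean signal and the residual error. Below is a minimal sketch of that conventional measure; the frame length, clipping range and function name are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def segmental_snr(clean, enhanced, frame_len=256, min_db=-10.0, max_db=35.0):
    """Frame-averaged segmental SNR in dB; each frame's SNR is clipped to
    [min_db, max_db] before averaging, as is common practice."""
    n_frames = min(len(clean), len(enhanced)) // frame_len
    snrs = []
    for i in range(n_frames):
        s = np.asarray(clean[i * frame_len:(i + 1) * frame_len], dtype=float)
        e = np.asarray(enhanced[i * frame_len:(i + 1) * frame_len], dtype=float)
        signal_energy = np.sum(s ** 2) + 1e-12
        error_energy = np.sum((s - e) ** 2) + 1e-12   # residual noise/distortion
        snr_db = 10.0 * np.log10(signal_energy / error_energy)
        snrs.append(np.clip(snr_db, min_db, max_db))
    return float(np.mean(snrs))
```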
Abstract:
Traditional experimental economics methods often consume enormous resources in the form of qualified human participants, and the inconsistency of a participant's decisions across repeated trials prevents sensitivity analyses. The problem can be solved if computer agents are capable of generating behaviors similar to those of the given participants in experiments. An experimental-economics-based analysis method is presented to extract deep information from questionnaire data and emulate any number of participants. Taking customers' willingness to purchase electric vehicles (EVs) as an example, multi-layer correlation information is extracted from a limited number of questionnaires. Agents mimicking the surveyed potential customers are modelled by matching the probability distributions of the willingness embedded in the questionnaires. The authenticity of both the model and the algorithm is validated by comparing the agent-based Monte Carlo simulation results with the questionnaire-based deduction results. With the aid of the agent models, the effects of minority agents with specific preferences on the results are also discussed.
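As a rough illustration of emulating respondents by matching probability distributions, the sketch below fits an empirical (categorical) distribution of willingness-to-purchase answers from questionnaire data and samples an arbitrary number of synthetic agents for a Monte Carlo run. The single-attribute simplification and all variable names are assumptions; the paper's multi-layer correlation extraction is not reproduced here.

```python
import numpy as np

# Hypothetical questionnaire answers: willingness to purchase an EV on a 1-5 scale.
questionnaire = np.array([1, 2, 2, 3, 3, 3, 4, 4, 5, 5, 4, 3, 2, 4, 5])

# Empirical distribution matched from the limited number of questionnaires.
levels, counts = np.unique(questionnaire, return_counts=True)
probs = counts / counts.sum()

def sample_agents(n_agents, rng):
    """Emulate n_agents respondents by sampling from the matched distribution."""
    return rng.choice(levels, size=n_agents, p=probs)

# Agent-based Monte Carlo estimate of the share of agents willing to purchase (level >= 4).
rng = np.random.default_rng(0)
agents = sample_agents(100_000, rng)
print("Estimated purchase willingness:", np.mean(agents >= 4))
```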
Abstract:
This paper introduces a novel load sharing algorithm to enable island synchronization. The system model used for development is based on an actual system for which historical measurement and fault data are available; these data are used to refine and test the algorithm's performance and validity. The electrical system modelled was selected for its high level of hydroelectric generation and its history of islanding events. Development of the load sharing algorithm proceeds in several steps. First, a simulation model is developed to represent the case study accurately; this is validated by matching system behavior against data from historical island events. Next, a generic island simulation is used to develop the load sharing algorithm. The algorithm is then tested against the validated simulation model representing the selected case study area. Finally, a laboratory setup is described which is used as a validation method for the novel load sharing algorithm.
Abstract:
To evaluate the performance of co-channel-transmission-based communication, we propose a new metric for the area spectral efficiency (ASE) of an interference-limited ad-hoc network, assuming that the nodes are randomly distributed according to a Poisson point process (PPP). We introduce a utility function, U = ASE/delay, and derive the optimal ALOHA transmission probability p and SIR threshold τ that jointly maximize the ASE and minimize the local delay. Finally, numerical results confirm that the joint optimization based on the U metric achieves a significant performance gain compared to conventional systems.
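To make the utility concrete, the sketch below grid-searches p and τ for a Poisson bipolar network under standard stochastic-geometry assumptions (Rayleigh fading, path-loss exponent α, interference-limited regime), where the success probability has the closed form P_s = exp(-λ p π R² τ^{2/α} Γ(1+2/α) Γ(1-2/α)), ASE = λ p P_s log2(1+τ) and the local delay is 1/(p P_s). These closed forms and all parameter values are illustrative assumptions, not the paper's exact model.

```python
import numpy as np
from scipy.special import gamma

# Illustrative Poisson bipolar network parameters (not taken from the paper).
lam, R, alpha = 0.1, 1.0, 4.0          # node density, link distance, path-loss exponent
delta = 2.0 / alpha
kappa = np.pi * gamma(1 + delta) * gamma(1 - delta)

def success_prob(p, tau):
    """Assumed SIR success probability under Rayleigh fading."""
    return np.exp(-lam * p * kappa * R ** 2 * tau ** delta)

def utility(p, tau):
    ase = lam * p * success_prob(p, tau) * np.log2(1 + tau)   # bits/s/Hz per unit area
    local_delay = 1.0 / (p * success_prob(p, tau))            # mean slots until success
    return ase / local_delay

# Joint grid search for the (p, tau) pair maximizing U = ASE/delay.
P, T = np.meshgrid(np.linspace(0.01, 1.0, 100), np.linspace(0.1, 20.0, 200))
U = utility(P, T)
i, j = np.unravel_index(np.argmax(U), U.shape)
print(f"optimal p ~ {P[i, j]:.2f}, optimal tau ~ {T[i, j]:.2f}, U = {U[i, j]:.4g}")
```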
Abstract:
BACKGROUND: Prior research on community-based specialist palliative care teams used outcome measures of place of death and/or dichotomous outcome measures of acute care use in the last two weeks of life. However, existing research seldom measured the diverse places of care used and their timing prior to death.
OBJECTIVE: The study objective was to examine the place of care in the last 30 days of life.
METHODS: In this retrospective cohort study, patients who received care from a specialist palliative care team (exposed) were matched by propensity score (see the sketch after this abstract) to patients who received usual care in the community (unexposed) in Ontario, Canada. The outcome was the percentage of patients in each place of care in the last month of life, expressed as a proportion of the total cohort.
RESULTS: After matching, 3109 patients were identified in each group, of whom 79% had cancer and 77% received end-of-life home care. Between 30 days and 7 days before death, the proportion of the exposed group receiving home care rose from 33% to 41% and the proportion in hospital from 14% to 15%, whereas in the unexposed group the proportion receiving home care rose from 28% to 32% and the proportion in hospital from 16% to 22%. Linear trend analysis (proportion over time) showed that the exposed group used significantly more home care services and fewer hospital days (p < 0.001) than the unexposed group. On the last day of life (place of death), 18% of the exposed group died in an in-patient hospital bed compared to 29% under usual care.
CONCLUSION: Examining place of care in the last month can effectively illustrate the service use trajectory over time.
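The exposed/unexposed comparison above rests on propensity-score matching. Below is a minimal 1:1 nearest-neighbour matching sketch on synthetic data; the covariates, caliper and logistic model are generic illustrative choices, not the study's actual matching protocol.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical covariates (age, cancer diagnosis, home-care use) and exposure indicator.
X = np.column_stack([rng.normal(75, 10, 2000),
                     rng.integers(0, 2, 2000),
                     rng.integers(0, 2, 2000)])
exposed = rng.integers(0, 2, 2000).astype(bool)

# Step 1: propensity score = P(exposed | covariates) from a logistic model.
ps = LogisticRegression(max_iter=1000).fit(X, exposed).predict_proba(X)[:, 1]

# Step 2: greedy 1:1 nearest-neighbour matching on the logit of the score,
# without replacement, using a 0.2-SD caliper.
logit = np.log(ps / (1 - ps))
caliper = 0.2 * logit.std()
controls = list(np.flatnonzero(~exposed))
pairs = []
for t in np.flatnonzero(exposed):
    dists = np.abs(logit[controls] - logit[t])
    k = int(np.argmin(dists))
    if dists[k] <= caliper:
        pairs.append((t, controls.pop(k)))
print(f"matched {len(pairs)} exposed/unexposed pairs")
```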
Abstract:
The area and power consumption of low-density parity-check (LDPC) decoders are typically dominated by embedded memories. To alleviate these high memory costs, this paper exploits the fact that all internal memories of an LDPC decoder are frequently updated with new data. These memory-access statistics are exploited by replacing all static standard-cell based memories (SCMs) of a prior-art LDPC decoder implementation with dynamic SCMs (D-SCMs), which are designed to retain data just long enough to guarantee reliable operation. The use of D-SCMs leads to a 44% reduction in the silicon area of the LDPC decoder compared to the use of static SCMs. The low-power LDPC decoder architecture with refresh-free D-SCMs was implemented in a 90nm CMOS process, and silicon measurements show full functionality and an information bit throughput of up to 600 Mbps (as required by the IEEE 802.11n standard).
Abstract:
In forensic investigations, it is common for investigators to obtain a photograph of evidence left at a crime scene to help them catch the culprit(s). Although fingerprints are the most popular evidence, scene-of-crime officers claim that more than 30% of the evidence recovered from crime scenes originates from palms. Palmprint evidence left at crime scenes is usually partial, since full palmprints are only very rarely obtained. In particular, partial palmprints do not exhibit a structured shape and often do not contain a reference point that can be used for alignment to achieve efficient matching. This causes conventional matching methods based on alignment and minutiae pairing, as used in fingerprint recognition, to fail in partial palmprint recognition problems. In this paper a new partial-to-full palmprint recognition approach based on invariant minutiae descriptors is proposed, in which the partial palmprint's minutiae are extracted and treated as the distinctive and discriminating features of each palmprint image. This is achieved by assigning to each minutia a feature descriptor formed from the values of all the orientation histograms around the minutia at hand. The descriptors are therefore rotation invariant and do not require any image alignment at the matching stage. The results obtained show that the proposed technique yields a recognition rate of 99.2%. The approach gives high confidence to a judicial jury in its deliberations and decisions.
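To illustrate the flavour of a rotation-invariant, orientation-histogram minutia descriptor, the sketch below builds a histogram of gradient orientations around a minutia and circularly shifts it so the dominant bin comes first, removing the need for image alignment before matching. The patch size, bin count and the shift-to-dominant-bin trick are illustrative assumptions rather than the paper's exact descriptor.

```python
import numpy as np

def minutia_descriptor(image, x, y, radius=16, n_bins=36):
    """Orientation-histogram descriptor for the patch centred on a minutia at (x, y)."""
    patch = image[y - radius:y + radius, x - radius:x + radius].astype(float)
    gy, gx = np.gradient(patch)                              # image gradients
    mag = np.hypot(gx, gy)
    ori = np.mod(np.arctan2(gy, gx), 2 * np.pi)              # orientations in [0, 2*pi)
    hist, _ = np.histogram(ori, bins=n_bins, range=(0, 2 * np.pi), weights=mag)
    hist = np.roll(hist, -int(np.argmax(hist)))              # dominant bin first => rotation invariant
    return hist / (np.linalg.norm(hist) + 1e-12)             # unit-norm descriptor

def minutia_distance(desc_a, desc_b):
    """Smaller distance => more similar minutiae (used to pair partial-to-full minutiae)."""
    return float(np.linalg.norm(desc_a - desc_b))
```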
Abstract:
Carbons are the main electrode materials used in supercapacitors, which are electrochemical energy storage devices with high power densities and long cycling lifetimes. However, increasing their energy density would improve their potential for commercial implementation.
In this regard, the use of high-surface-area carbons and high-voltage electrolytes are well-known strategies to increase the attainable energy density, and lately ionic liquids have been explored as promising alternatives to current state-of-the-art acetonitrile-based electrolytes. In terms of safety and sustainability, too, ionic liquids are attractive electrolyte materials for supercapacitors. In addition, it has been shown that matching the carbon pore size with the electrolyte ion size further increases the attainable electrochemical double layer (ECDL) capacitance and energy density.
The use of pseudocapacitive reactions can significantly increase the attainable energy density, and quinone-based materials offer a potentially sustainable and cost-effective research avenue for both the electrode and the electrolyte.
This perspective provides an overview of current state-of-the-art research on supercapacitors based on combinations of carbons, ionic liquids and quinonic compounds, highlighting performance and challenges and discussing possible future research avenues. Current interest is mainly focused on strategies which may ultimately lead to commercially competitive, sustainable, high-performance supercapacitors for different applications, including those requiring mechanical flexibility and biocompatibility.
Abstract:
Current data-intensive image processing applications push traditional embedded architectures to their limits. FPGA-based hardware acceleration is a potential solution, but the programmability gap and the time-consuming HDL design flow are significant obstacles. The proposed research develops an FPGA-based programmable hardware acceleration platform that uses a large number of Streaming Image processing Processors (SIPPro) to address these issues. SIPPro is a pipelined, in-order soft-core processor architecture with specific optimisations for image processing applications. Each SIPPro core uses 1 DSP48, 2 Block RAMs and 370 slice registers, making the processor as compact as possible whilst maintaining flexibility and programmability. It is an area-efficient, scalable and high-performance soft-core architecture capable of delivering 530 MIPS per core using a Xilinx Zynq SoC (ZC7Z020-3). To evaluate the feasibility of the proposed architecture, a Traffic Sign Recognition (TSR) algorithm has been prototyped on a Zedboard, with the color and morphology operations accelerated using multiple SIPPros. Simulation and experimental results demonstrate that the processing platform achieves speedups of 15 and 33 times for color filtering and morphology operations respectively, with significantly reduced design effort and time.
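For readers unfamiliar with the two accelerated stages, the sketch below shows a software-level equivalent of typical colour filtering and morphology steps in a traffic sign recognition front end; the thresholds and structuring element are hypothetical, and this is not the SIPPro implementation.

```python
import numpy as np
from scipy.ndimage import binary_dilation, binary_erosion

def red_sign_mask(rgb):
    """Colour filter: keep pixels whose red channel dominates (illustrative thresholds)."""
    r = rgb[..., 0].astype(int)
    g = rgb[..., 1].astype(int)
    b = rgb[..., 2].astype(int)
    return (r > 100) & (r > g + 30) & (r > b + 30)

def clean_mask(mask, iterations=2):
    """Morphology: opening (erosion then dilation) to remove speckle noise."""
    se = np.ones((3, 3), dtype=bool)                  # 3x3 structuring element
    opened = binary_erosion(mask, structure=se, iterations=iterations)
    return binary_dilation(opened, structure=se, iterations=iterations)

# Example on a random frame standing in for a camera image.
frame = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
candidates = clean_mask(red_sign_mask(frame))
print("candidate sign pixels:", int(candidates.sum()))
```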
Abstract:
The integration of an ever-growing proportion of large-scale distributed renewable generation has increased the probability of maloperation of traditional RoCoF and vector shift relays. With reduced inertia due to non-synchronous penetration in a power grid, system-wide disturbances have forced the utility industry to design advanced protection schemes to prevent system degradation and avoid cascading outages leading to widespread blackouts. This paper explores a novel adaptive nonlinear approach to islanding detection based on wide-area phase angle measurements. This is challenging, since the voltage phase angles from different locations exhibit not only strongly nonlinear but also time-varying characteristics. The adaptive nonlinear technique, called moving window kernel principal component analysis, is proposed to model the time-varying and nonlinear trends in the voltage phase angle data. The effectiveness of the technique is exemplified using both DigSilent simulated cases and real test cases recorded from the Great Britain and Ireland power systems by the OpenPMU project.
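A minimal sketch of the moving-window idea is shown below: kernel PCA is refitted on a sliding window of phase-angle snapshots and a Hotelling-style statistic scores each new sample against the current window. The window length, kernel settings and detection statistic are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.preprocessing import StandardScaler

def kpca_score(window, new_sample, n_components=3, gamma=0.1):
    """Fit kernel PCA on the current window and return a T^2-style score for new_sample."""
    scaler = StandardScaler().fit(window)
    kpca = KernelPCA(n_components=n_components, kernel="rbf", gamma=gamma)
    t_window = kpca.fit_transform(scaler.transform(window))
    t_new = kpca.transform(scaler.transform(new_sample.reshape(1, -1)))
    inv_var = 1.0 / (t_window.var(axis=0) + 1e-12)
    return float((t_new ** 2 * inv_var).sum())

# Moving-window monitoring over a synthetic stream of PMU phase-angle snapshots.
rng = np.random.default_rng(1)
stream = rng.normal(size=(400, 8))               # 8 phase-angle channels
window_size = 150
scores = [kpca_score(stream[k - window_size:k], stream[k])
          for k in range(window_size, len(stream))]
# In practice each score is compared against a control limit derived from the
# window itself to flag an islanding event.
print("largest monitoring score:", max(scores))
```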
Abstract:
This paper proposes a probabilistic principal component analysis (PCA) approach to islanding detection based on wide-area PMU data. The increasing probability of uncontrolled islanding operation is, according to many power system operators, one of the biggest concerns arising from a large penetration of distributed renewable generation. The traditional islanding detection methods, such as RoCoF and vector shift, are however extremely sensitive and may result in many unwanted trips. The proposed probabilistic PCA aims to improve islanding detection accuracy and reduce the risk of unwanted tripping based on PMU measurements, while addressing the practical issue of missing data. The reliability and accuracy of the proposed probabilistic PCA approach are demonstrated using real data recorded in the UK power system by the OpenPMU project. The results show that the proposed method can detect islanding accurately, without being falsely triggered by generation trips, even in the presence of missing values.
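Probabilistic PCA handles missing values through its latent-variable formulation; the sketch below uses a simplified EM-style alternative (alternately imputing missing entries from a low-rank PCA reconstruction and refitting) to show the principle on synthetic phase-angle data. This stand-in, its parameters and the squared-prediction-error statistic are assumptions, not the authors' probabilistic PCA.

```python
import numpy as np
from sklearn.decomposition import PCA

def iterative_pca_impute(X, n_components=2, n_iter=50):
    """Alternate between imputing missing entries from the current low-rank
    reconstruction and refitting PCA (a simplified stand-in for probabilistic PCA)."""
    X = np.array(X, dtype=float)
    missing = np.isnan(X)
    filled = np.where(missing, np.nanmean(X, axis=0), X)   # start from column means
    for _ in range(n_iter):
        pca = PCA(n_components=n_components).fit(filled)
        recon = pca.inverse_transform(pca.transform(filled))
        filled[missing] = recon[missing]
    return filled, pca

# Synthetic phase-angle-like data with roughly 10% of values missing.
rng = np.random.default_rng(0)
latent = rng.normal(size=(300, 2))
X = latent @ rng.normal(size=(2, 6)) + 0.05 * rng.normal(size=(300, 6))
X[rng.random(X.shape) < 0.1] = np.nan

X_filled, model = iterative_pca_impute(X)
# Islanding-style detection would threshold a residual statistic such as the
# squared prediction error of each snapshot against the low-rank model.
spe = ((X_filled - model.inverse_transform(model.transform(X_filled))) ** 2).sum(axis=1)
print("largest squared prediction error:", float(spe.max()))
```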
Abstract:
The risks associated with zoonotic infections transmitted by companion animals are a serious public health concern: controlling the incidence of zoonoses in domestic dogs, both owned and stray, is hence important to protect human health. Integrated dog population management (DPM) programs, based on information systems providing reliable data on the structure and composition of the existing dog population in a given area, are fundamental for making realistic plans for any disease surveillance and action system. Traceability systems, based on the compulsory electronic identification of dogs and their registration in a computerised database, are one of the most effective ways to ensure the usefulness of DPM programs. Although this approach provides many advantages, several areas of improvement have emerged in countries where it has been applied. In Italy, every region hosts its own dog register, but these are not compatible with one another. This paper shows the advantages of a web-based application for improving data management of regional dog registers. The approach used to build this system was inspired by farm animal traceability schemes and relies on a network of services that allows multi-channel access by different devices and data exchange via the web with other existing applications, without changing the pre-existing platforms. Today the system manages a database of over 300,000 dogs registered in three different Italian regions. By integrating multiple Web Services, this approach could be the solution to gathering data at national and international levels at reasonable cost, creating a large-scale, cross-border traceability system that can be used for disease surveillance and the development of population management plans.
Abstract:
Background
Neighbourhood segregation has been described as a fundamental determinant of physical health, but literature on its effect on mental health is less clear. Whilst most previous research has relied on conceptualized measures of segregation, Northern Ireland is unique as it contains physical manifestations of segregation in the form of segregation barriers (or “peacelines”) which can be used to accurately identify residential segregation.
Methods
We used population-wide health record data on over 1.3 million individuals to analyse the effect of residential segregation, measured both by the formal Dissimilarity Index (defined after this abstract) and by proximity to a segregation barrier, on the likelihood of poor mental health.
Results
Using multi-level logistic regression models, we found that residential segregation measured by the Dissimilarity Index poses no additional risk to the likelihood of poor mental health after adjustment for area-level deprivation. However, residence in an area segregated by a "peaceline" increases the odds of antidepressant medication use by 19% (OR=1.19, 95% CI: 1.14, 1.23) and of anxiolytic medication use by 39% (OR=1.39, 95% CI: 1.32, 1.48), even after adjustment for gender, age, conurbation, deprivation and crime.
Conclusions
Living in an area segregated by a 'peaceline' is detrimental to mental health, suggesting that segregated areas characterised by a heightened sense of the 'other' pose a greater risk to mental health. The difference in results between segregation measures highlights the importance of the choice of measure when studying segregation.
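For reference, the formal Dissimilarity Index referred to above is conventionally defined over small areas $i$ as

$$ D = \frac{1}{2} \sum_{i} \left| \frac{a_i}{A} - \frac{b_i}{B} \right|, $$

where $a_i$ and $b_i$ are the counts of the two population groups in area $i$, and $A$ and $B$ are their respective totals; $D$ ranges from 0 (even mixing) to 1 (complete segregation). This is the standard definition, not necessarily the exact variant computed in the study.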
Abstract:
Lattice-based cryptography has gained credence recently as a replacement for current public-key cryptosystems, due to its quantum resilience, versatility, and relatively small key sizes. To date, encryption based on the learning with errors (LWE) problem has only been investigated from an ideal lattice standpoint, due to its computation and size efficiencies. However, a thorough investigation of standard lattices in practice has yet to be considered. Standard lattices may be preferred to ideal lattices due to their stronger security assumptions and less restrictive parameter selection process. In this paper, an area-optimised hardware architecture of a standard lattice-based cryptographic scheme is proposed. The design is implemented on an FPGA, and it is found that both encryption and decryption fit comfortably on a Spartan-6 FPGA. This is the first hardware architecture for standard lattice-based cryptography reported in the literature to date, and it thus serves as a benchmark for future implementations.
Additionally, a revised discrete Gaussian sampler is proposed which is the fastest of its type to date, and which is the first to investigate the cost savings of implementing with λ/2 bits of precision. Performance results are promising in comparison to the hardware designs of the equivalent ring-LWE scheme: in addition to providing a stronger security proof, the proposed design generates 1272 encryptions per second and 4395 decryptions per second.
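To make the standard-LWE setting concrete, the toy sketch below implements Regev-style encryption and decryption of a single bit over standard (non-ideal) lattices with small illustrative parameters and a rounded-Gaussian error sampler. It is a didactic sketch under those assumptions, not the paper's hardware architecture, parameter set, or reduced-precision sampler.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, q, sigma = 64, 512, 4093, 3.2     # toy parameters, far below real security levels

def gauss(size):
    """Simplified discrete Gaussian: round a continuous Gaussian (illustration only)."""
    return np.rint(rng.normal(0, sigma, size)).astype(int) % q

# Key generation over standard lattices: public (A, b) with b = A s + e (mod q).
A = rng.integers(0, q, (m, n))
s = gauss(n)
b = (A @ s + gauss(m)) % q

def encrypt(bit):
    r = rng.integers(0, 2, m)                        # random 0/1 combination of the m samples
    return (r @ A) % q, (r @ b + bit * (q // 2)) % q

def decrypt(c1, c2):
    d = (c2 - c1 @ s) % q
    return int(min(d, q - d) > q // 4)               # closer to q/2 => bit was 1

for bit in (0, 1):
    assert decrypt(*encrypt(bit)) == bit
print("toy standard-LWE round trip OK")
```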