406 results for Speech-processing technologies
Abstract:
Filtration membrane technology has already been employed to remove various organic effluents produced by the textile, paper, plastic, leather, food and mineral processing industries. To improve membrane efficiency and alleviate membrane fouling, an integrated approach is adopted that combines membrane filtration and photocatalysis technology. In this study, alumina nanofiber (AF) membranes with a pore size of about 10 nm (determined by the liquid-liquid displacement method) have been synthesized through an in situ hydrothermal reaction, which permitted a large flux and achieved high selectivity. Silver nanoparticles (Ag NPs) are subsequently doped onto the nanofibers of the membranes. Silver nanoparticles strongly absorb visible light due to the surface plasmon resonance (SPR) effect, and thus induce photocatalytic degradation of organic dyes, including anionic, cationic and neutral dyes, under visible light irradiation. In this integrated system, the dyes are retained on the membrane surface, so their concentration in the vicinity of the Ag NPs is high and they can be efficiently decomposed. Meanwhile, the usual flux deterioration caused by the accumulation of the filtered dyes in the pore passages is avoided. For example, when an aqueous solution containing methylene blue was processed using an integrated membrane, a large flux of 200 L m⁻² h⁻¹ and a stable permeating selectivity of 85% were achieved. The combined photocatalysis and filtration function leads to the superior performance of the integrated membranes, which have the potential to be used for the removal of organic pollutants from drinking water.
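As a quick sense check of the reported figures, the sketch below converts the 200 L m⁻² h⁻¹ flux into a daily treated volume; the flux and selectivity values come from the abstract, while the 0.5 m² membrane area is a purely hypothetical value for illustration.

```python
# Worked example: what the reported flux means in practice.
# Flux (200 L m^-2 h^-1) and selectivity (85%) are from the abstract;
# the membrane area (0.5 m^2) is a hypothetical illustration value.
flux_L_per_m2_h = 200.0   # permeate flux for the methylene blue solution
selectivity = 0.85        # stable permeating selectivity
area_m2 = 0.5             # assumed membrane area

daily_volume_L = flux_L_per_m2_h * area_m2 * 24
print(f"~{daily_volume_L:.0f} L of water treated per day at {selectivity:.0%} selectivity")
```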
Abstract:
This volume represents the proceedings of the 13th ENTER conference, held in Lausanne, Switzerland, in 2006. The conference brought together academics and practitioners across four tracks: eSolutions, refereed research papers, work-in-progress papers, and a Ph.D. workshop. The proceedings contain 40 refereed papers, fewer than the 51 papers presented in 2005; however, the editors advise that the scientific committee was stricter than in previous years, to the extent that the acceptance rate was 50%. A significant change in the current proceedings is the inclusion of extended abstracts of the 23 work-in-progress presentations. The papers cover a diverse range of topics across 16 research streams. This reviewer has adopted the approach of succinctly summarising the contribution of each of the 40 refereed papers, in the order in which they appear...
Abstract:
This volume represents the proceedings of the 12th ENTER conference, held in Innsbruck in 2005. While the conference also accepted work-in-progress papers and included a Ph.D. workshop, the proceedings contain 51 research papers by 102 authors. The general theme of the conference was "eBusiness is here: what is next?", and the papers cover a diverse range of topics across nine tracks. This reviewer has adopted the approach of succinctly summarising the contribution of each of the papers, in the order in which they appear...
Abstract:
"This volume represents the proceedings of the 10th ENTER conference, held in Helsinki, Finland during January 2003. The conference theme was ‘technology on the move’, and the 476pp. proceedings offer 50 papers by 108 authors. The editors advise all papers were subject to a double blind peer review. The research has been categorised into 18 broad headings, which reflects the diversity of topics addressed. This reviewer has adopted the approach of succinctly summarising each of the papers, in the order they appear, to assist readers of Tourism Management in judging the potential value of the content for their own work..." -- publisher website
Abstract:
This summary is based on an international review of leading peer-reviewed journals in both technical and management fields. It draws on highly cited articles published between 2000 and 2009 to investigate the research question, "What are the diffusion determinants for passive building technologies in Australia?". Using a conceptual framework drawn from the innovation systems literature, this paper synthesises and interprets the literature to map the current state of passive building technologies in Australia and to analyse the drivers for, and obstacles to, their optimal diffusion. The paper concludes that government has a key role to play through its influence over the specification of building codes.
Abstract:
Background: When observers are asked to identify two targets in rapid sequence, they often suffer profound performance deficits for the second target, even when the spatial location of the targets is known. This attentional blink (AB) is usually attributed to the time required to process a previous target, implying that a link should exist between individual differences in information-processing speed and the AB. Methodology/Principal Findings: The present work investigated this question by examining the relationship between a rapid automatized naming task typically used to assess information-processing speed and the magnitude of the AB. The results indicated that faster processing actually resulted in a greater AB, but only when targets were presented amongst high-similarity distractors. When target-distractor similarity was minimal, processing speed was unrelated to the AB. Conclusions/Significance: Our findings indicate that information-processing speed is unrelated to target-processing efficiency per se, but rather to individual differences in observers' ability to suppress distractors. This is consistent with evidence that individuals who are able to avoid distraction are more efficient at deploying temporal attention, but argues against a direct link between general processing speed and efficient information selection.
Abstract:
Compositionality is a frequently made assumption in linguistics, and yet many human subjects reveal highly non-compositional word associations when confronted with novel concept combinations. This article will show how a non-compositional account of concept combinations can be supplied by modelling them as interacting quantum systems.
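A minimal sketch of the kind of formalism the abstract gestures at: two concepts modelled as two-level quantum systems whose joint state is entangled, and so cannot be written as a product of single-concept states. The construction below is a standard textbook example, offered only as an illustration of non-compositionality, not as the article's specific model.

```python
# Sketch: a joint state of two "concept" systems that is entangled, i.e. not
# expressible as a tensor product of individual states. A product state's
# 2x2 coefficient matrix has rank 1 (Schmidt rank 1); rank 2 means entangled.
import numpy as np

bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)   # (|00> + |11>)/sqrt(2)
schmidt_rank = np.linalg.matrix_rank(bell.reshape(2, 2))
print("non-compositional (entangled):", schmidt_rank > 1)   # True
```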
Abstract:
Future vehicle navigation for safety applications requires seamless positioning at sub-meter accuracy or better. However, standalone Global Positioning System (GPS) and Differential GPS (DGPS) receivers suffer from solution outages when used in restricted areas such as high-rise urban environments and tunnels, due to the blockage of satellite signals. Smoothed DGPS can provide sub-meter positioning accuracy, but cannot meet the seamlessness requirement. Traditional onboard navigation aids such as Dead Reckoning and Inertial Measurement Units are either not accurate enough, due to error accumulation, or too expensive to be acceptable to mass-market vehicle users. One alternative is to use wireless infrastructure installed along the roadside to locate vehicles in regions where Global Navigation Satellite System (GNSS) signals are not available (for example, inside tunnels, urban canyons and large indoor car parks). Examples of roadside infrastructure that could potentially be used for positioning purposes include Wireless Local Area Network (WLAN)/Wireless Personal Area Network (WPAN) based positioning systems, Ultra-wideband (UWB) based positioning systems, Dedicated Short Range Communication (DSRC) devices, Locata's positioning technology, and accurate road surface height information over selected road segments such as tunnels. This research reviews and compares the wireless technologies that could be installed along the roadside for positioning purposes. Models and algorithms for integrating different positioning technologies are also presented. Various simulation schemes are designed to examine the performance benefits of combining GNSS and roadside infrastructure for vehicle positioning. The results from these experimental studies have shown a number of useful findings. It is clear that in open road environments, where sufficient satellite signals can be obtained, the roadside wireless measurements contribute very little to the improvement of positioning accuracy at the sub-meter level, especially in the dual-constellation cases. In restricted outdoor environments where only a few GPS satellites can be received, such as those with elevation angles above 45°, the roadside distance measurements can help improve both positioning accuracy and availability to the sub-meter level. When the vehicle is travelling in tunnels, with known tunnel surface heights and roadside distance measurements, sub-meter horizontal positioning accuracy is also achievable. Overall, the simulation results have demonstrated that roadside infrastructure indeed has the potential to provide sub-meter vehicle position solutions for certain road safety applications, provided that properly deployed roadside measurements are obtainable.
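The abstract mentions models and algorithms for integrating GNSS and roadside measurements without detailing them; one standard building block is an iterative least-squares fix that stacks range measurements from all available anchors (satellites or roadside nodes) into a single solution. The sketch below is a minimal Gauss-Newton version of that idea: it ignores the receiver clock bias for simplicity, and all anchor positions and ranges are synthetic illustration values, not the thesis's data or exact algorithm.

```python
# Minimal sketch: iterative least-squares position fix fusing ranges to known
# anchors (e.g., GNSS satellites and roadside UWB/DSRC nodes treated uniformly).
# Receiver clock bias is ignored for simplicity; all values are synthetic.
import numpy as np

def solve_position(anchors, ranges, x0, iters=10):
    """Gauss-Newton solution of x minimising sum_i (||x - a_i|| - r_i)^2."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        diffs = x - anchors                     # vectors from each anchor to x
        dists = np.linalg.norm(diffs, axis=1)   # predicted ranges
        H = diffs / dists[:, None]              # Jacobian of range w.r.t. x
        residual = ranges - dists
        dx, *_ = np.linalg.lstsq(H, residual, rcond=None)
        x += dx
        if np.linalg.norm(dx) < 1e-6:
            break
    return x

anchors = np.array([[0., 0., 20.], [100., 0., 20.],    # roadside nodes
                    [0., 100., 20.], [50., 50., 2e7]])  # one satellite-like anchor
truth = np.array([40., 30., 1.5])
ranges = np.linalg.norm(anchors - truth, axis=1)        # noise-free for clarity
print(solve_position(anchors, ranges, x0=[10., 10., 0.]))
```

Adding a roadside node to the anchor list is then no different from adding a satellite, which is what makes this kind of fusion attractive in tunnels and urban canyons.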
Abstract:
This paper introduces the Weighted Linear Discriminant Analysis (WLDA) technique, based upon the weighted pairwise Fisher criterion, for the purposes of improving i-vector speaker verification in the presence of high intersession variability. By taking advantage of the speaker discriminative information that is available in the distances between pairs of speakers clustered in the development i-vector space, the WLDA technique is shown to provide an improvement in speaker verification performance over traditional Linear Discriminant Analysis (LDA) approaches. A similar approach is also taken to extend the recently developed Source Normalised LDA (SNLDA) into Weighted SNLDA (WSNLDA) which, similarly, shows an improvement in speaker verification performance in both matched and mismatched enrolment/verification conditions. Based upon the results presented within this paper using the NIST 2008 Speaker Recognition Evaluation dataset, we believe that both WLDA and WSNLDA are viable as replacement techniques to improve the performance of LDA and SNLDA-based i-vector speaker verification.
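The weighted pairwise Fisher criterion that WLDA builds on replaces LDA's usual between-class scatter with a weighted sum over pairs of class (speaker) means, so that already well-separated pairs contribute less. A minimal sketch is below; the weighting function w(d) = 1/d² is an illustrative assumption, not necessarily the paper's exact choice, and the dimension-reduction step is the standard generalized eigenproblem.

```python
# Sketch of a weighted pairwise Fisher criterion for LDA, with a weighting
# w(d) that down-weights already well-separated class (speaker) pairs.
import numpy as np
from scipy.linalg import eigh

def weighted_lda(X, y, n_components, weight=lambda d: 1.0 / (d ** 2 + 1e-8)):
    classes = np.unique(y)
    means = np.array([X[y == c].mean(axis=0) for c in classes])
    # Within-class scatter, pooled over all classes.
    Sw = sum(np.cov(X[y == c].T, bias=True) * (y == c).sum() for c in classes)
    # Weighted pairwise between-class scatter.
    Sb = np.zeros((X.shape[1], X.shape[1]))
    for i in range(len(classes)):
        for j in range(i + 1, len(classes)):
            diff = (means[i] - means[j])[:, None]
            n_i, n_j = (y == classes[i]).sum(), (y == classes[j]).sum()
            Sb += weight(np.linalg.norm(diff)) * n_i * n_j * (diff @ diff.T)
    # Generalized eigenproblem Sb v = lambda Sw v; keep leading directions.
    vals, vecs = eigh(Sb, Sw + 1e-6 * np.eye(X.shape[1]))
    return vecs[:, np.argsort(vals)[::-1][:n_components]]
```

The returned projection would then be applied to development and test i-vectors before scoring, in place of the standard LDA transform.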
Abstract:
The iPhone represents an important moment in both the short history of mobile media and the long history of cultural technologies. Like the Walkman of the 1980s, it marks a juncture in which notions about identity, individualism, lifestyle and sociality require rearticulation. This book explores not only the iPhone’s particular characteristics, uses and "affects," but also how the "iPhone moment" functions as a barometer for broader patterns of change. In the iPhone moment, this study considers the convergent trajectories in the evolution of digital and mobile culture, and their implications for future scholarship. Through the lens of the iPhone—as a symbol, culture and a set of material practices around contemporary convergent mobile media—the essays collected here explore the most productive theoretical and methodological approaches for grasping media practice, consumer culture and networked communication in the twenty-first century.
Abstract:
Using Gray and McNaughton's (2000) revised Reinforcement Sensitivity Theory (r-RST), we examined the influence of personality on the processing of words presented in gain-framed and loss-framed anti-speeding messages, and how the processing biases associated with personality influenced message acceptance. The r-RST predicts that the nervous system regulates personality and that behaviour is dependent upon the activation of the Behavioural Activation System (BAS), activated by reward cues, and the Fight-Flight-Freeze System (FFFS), activated by punishment cues. According to r-RST, individuals differ in the sensitivities of their BAS and FFFS (i.e., weak to strong), which in turn leads to stable patterns of behaviour in the presence of rewards and punishments, respectively. It was hypothesised that individual differences in personality (i.e., strength of the BAS and the FFFS) would influence the degree of both message processing (as measured by reaction time to previously viewed message words) and message acceptance (measured in three ways: perceived message effectiveness, behavioural intentions, and attitudes). Specifically, it was anticipated that individuals with a stronger BAS would process the words presented in the gain-framed messages faster than those with a weaker BAS, and that individuals with a stronger FFFS would process the words presented in the loss-framed messages faster than those with a weaker FFFS. Further, it was expected that greater processing (faster reaction times) would be associated with greater acceptance of that message. Students holding driver licences (N = 108) were recruited to view one of four anti-speeding messages (i.e., social gain-framed, social loss-framed, physical gain-framed, and physical loss-framed). A computerised lexical decision task assessed participants' subsequent reaction times to message words, as an indicator of the extent of processing of the previously viewed message. Self-report measures assessed personality and the three message acceptance measures. As predicted, the degree of initial processing of the content of the social gain-framed message mediated the relationship between the reward sensitivity trait and message effectiveness. Initial processing of the physical loss-framed message partially mediated the relationship between the punishment sensitivity trait and both message effectiveness and behavioural intention ratings. These results show that reward sensitivity and punishment sensitivity traits influence the cognitive processing of gain-framed and loss-framed message content, respectively, and subsequently message effectiveness and behavioural intention ratings. Specifically, a range of road safety messages (i.e., gain-framed and loss-framed messages) could be designed to align with the processing biases associated with personality, targeting those individuals who are sensitive to rewards and those who are sensitive to punishments.
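For readers unfamiliar with the mediation analyses reported above, the sketch below illustrates the basic regression logic (trait → processing → acceptance) on synthetic data. The variable names, effect sizes and the simple two-regression estimate of the indirect effect are illustrative only; a published analysis would typically add a bootstrap confidence interval.

```python
# Minimal sketch of the simple-mediation logic in the abstract:
# personality trait (X) -> message processing speed (M) -> acceptance (Y).
# Data are synthetic; effect sizes are arbitrary illustration values.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 108
trait = rng.normal(size=n)                          # e.g., BAS sensitivity
processing = 0.5 * trait + rng.normal(size=n)       # e.g., inverted reaction time
acceptance = 0.4 * processing + 0.1 * trait + rng.normal(size=n)

a = sm.OLS(processing, sm.add_constant(trait)).fit().params[1]        # X -> M
b = sm.OLS(acceptance, sm.add_constant(
        np.column_stack([processing, trait]))).fit().params[1]        # M -> Y | X
print(f"indirect (mediated) effect a*b = {a * b:.3f}")
```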
Abstract:
The question of under what conditions conceptual representation is compositional remains a matter of debate within cognitive science. This paper proposes a well-developed mathematical apparatus for a probabilistic representation of concepts, drawing upon methods developed in quantum theory to propose a formal test that can determine whether a specific conceptual combination is compositional or not. This test examines a joint probability distribution modelling the combination, asking whether or not it is factorizable. Empirical studies indicate that some combinations should be considered non-compositional.
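Stated concretely, the simplest form of the factorizability question asks whether the joint distribution over the two constituents' interpretations equals the product of its marginals. The sketch below runs that check on a made-up 2x2 joint distribution; the paper's actual test may be more general (e.g., allowing factorization through a latent variable), so this is only the base case.

```python
# Minimal check of whether a joint distribution over a concept combination
# factorizes into its marginals, P(a, b) = P(a) * P(b). The 2x2 joint below
# is a made-up example, not data from the paper.
import numpy as np

joint = np.array([[0.40, 0.10],    # rows: interpretations of concept A
                  [0.10, 0.40]])   # cols: interpretations of concept B
p_a = joint.sum(axis=1)            # marginal over A
p_b = joint.sum(axis=0)            # marginal over B
product = np.outer(p_a, p_b)

compositional = np.allclose(joint, product, atol=1e-9)
print(f"factorizable (compositional): {compositional}")  # False here
```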
Abstract:
Serving as a powerful tool for extracting localized variations in non-stationary signals, wavelet transforms (WTs) have been introduced into traffic engineering applications; however, these applications lack some important theoretical fundamentals. In particular, there is little guidance on selecting an appropriate WT across potential transport applications. The research described in this paper contributes uniquely to the literature by first describing a numerical experiment that demonstrates the shortcomings of commonly used data-processing techniques in traffic engineering (i.e., averaging, moving averaging, second-order difference, oblique cumulative curve, and short-time Fourier transform). It then mathematically describes the WT's ability to detect singularities in traffic data. Next, the selection of a suitable WT for a particular research topic in traffic engineering is discussed in detail by objectively and quantitatively comparing candidate wavelets' performances in a numerical experiment. Finally, based on several case studies using both loop detector data and vehicle trajectories, it is shown that selecting a suitable wavelet largely depends on the specific research topic, and that the Mexican hat wavelet generally gives satisfactory performance in detecting singularities in traffic and vehicular data.
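As a concrete illustration of the kind of singularity detection described above, the sketch below convolves a synthetic speed series containing an abrupt drop with a hand-built Mexican hat wavelet; the signal, scale and margin choices are illustrative assumptions, not the paper's case-study settings.

```python
# Sketch: flagging a singularity (sudden speed drop) in a synthetic traffic
# signal with a Mexican hat (Ricker) wavelet.
import numpy as np

def mexican_hat(length, a):
    """Mexican hat wavelet (1 - x^2) exp(-x^2/2), x = t/a, centred on the window."""
    t = np.arange(length) - (length - 1) / 2.0
    x = t / a
    return (1 - x ** 2) * np.exp(-x ** 2 / 2)

# Free flow at 100 km/h, abrupt drop to 40 km/h at index 300, plus sensor noise.
speed = np.concatenate([np.full(300, 100.0), np.full(300, 40.0)])
speed += np.random.default_rng(1).normal(0, 2.0, speed.size)

scale = 16
coeffs = np.convolve(speed, mexican_hat(8 * scale, scale), mode="same")
margin = 4 * scale                      # ignore edge artifacts from truncation
interior = np.abs(coeffs[margin:-margin])
# The response peaks within about one scale of the true break at index 300.
print("speed drop detected near index:", margin + int(np.argmax(interior)))
```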
Abstract:
Biomass and non-food crop residues are seen as relatively low-cost and abundant renewable sources capable of making a large contribution to the world's future energy and chemicals supply. Significant quantities of ethanol are currently produced from biomass via biochemical processes, but thermochemical conversion processes offer greater potential to utilize the entire biomass source to produce a range of products. This chapter reviews thermochemical gasification and pyrolysis methods with a focus on hydrothermal liquefaction processes. Hydrothermal liquefaction is the most energetically advantageous thermochemical biomass conversion process. If the target is to produce sustainable liquid fuels and chemicals and to reduce the impact of global warming caused by carbon dioxide, nitrous oxide and methane emissions (i.e., to protect the natural environment), the use of "green" solvents, biocatalysts and heterogeneous catalysts must be the main R&D initiative. As the biocrude produced from hydrothermal liquefaction is a complex mixture that is relatively viscous, corrosive and unstable to oxidation (due to the presence of water and oxygenated compounds), additional upgrading processes are required to produce suitable biofuels and chemicals.
Abstract:
On 1 January 2010, the Assisted Reproductive Treatment Act 2008 (Vic) came into force. The legislation was the outcome of a detailed review and consultation process undertaken by the Victorian Law Reform Commission. Arguably, the change to the regulatory framework represents a significant shift in policy compared to previous regulatory approaches on this topic in Victoria. This article considers the impact of the new legislation on eligibility for reproductive treatments, focusing on the accessibility of such services for the purpose of creating a “saviour sibling”. It also highlights the impact of the Victorian regulatory body’s decision to abolish its regulatory policies on preimplantation genetic diagnosis and preimplantation tissue-typing, concluding that the regulatory approach in relation to these latter issues is similar to other Australian jurisdictions where such practices are not addressed by a statutory framework.