19 results for one step estimation

in the Cambridge University Engineering Department Publications Database


Relevance:

100.00%

Publisher:

Abstract:

Recent development of solution-processable organic semiconductors marks the emergence of a new generation of air-stable, high-performance p- and n-type materials, making printed organic complementary (CMOS) circuits feasible in real applications. The main technical bottleneck preventing organic CMOS from being adopted as the next-generation organic integrated circuit is the simultaneous deposition and high-resolution patterning of both p- and n-type semiconductor materials. This is a significant technical challenge, especially when multiple layers must be patterned without mask alignment. In this paper, we propose a one-step self-aligned fabrication process which allows the deposition and high-resolution patterning of the functional layers of both p- and n-channel thin-film transistors (TFTs) simultaneously. All the dimensional information of the device components is featured on a single imprinting stamp, and the TFT-channel geometry, electrodes with different work functions, p- and n-type semiconductors and effective gate dimensions can all be accurately defined by one-step imprinting and the subsequent pattern transfer process. As an example, we have demonstrated an organic complementary inverter fabricated by 3D imprinting in combination with inkjet printing; the measured electrical characteristics validate the feasibility of the novel technique. © 2012 Elsevier B.V. All rights reserved.

Relevance:

100.00%

Publisher:

Abstract:

La0.7Ca0.3MnO3/Mn3O4 composites can be synthesized in one step by thermal treatment of a spray-dried precursor, instead of mixing pre-synthesized powders. Another advantage of this composite system is that a long sintering step can be used without leading to significant modification of the manganite composition. The percolation threshold is reached at ∼20 vol% of manganite phase. The 77 K low field magnetoresistance is enhanced to ∼11% at 0.15 T when the composition is close to the percolation threshold. © 2007 Elsevier Ltd. All rights reserved.

Relevance:

80.00%

Publisher:

Abstract:

While the world is focused on controlling the spread of diseases such as HIV and malaria in the developing world, another approaching epidemic has been largely overlooked. The World Health Organization predicts that there will be 16 million new cancer cases per year in 2020, and 70% of these will be in the developing world. Many of these cancers are preventable, or treatable when detected early enough. Establishing effective, affordable and workable cancer control plans in African countries is one step in the right direction toward limiting this epidemic.

Relevance:

80.00%

Publisher:

Abstract:

At the first international Visualization Summit, more than 100 international researchers and practitioners defined and assessed nine original and important research goals in the context of Visualization Science, and proposed methods for achieving these goals by 2010. The synthesis of the whole event is presented in the 10th research goal. This article contributes a building block for systemizing visualization research by proposing mutually elaborated research goals with defined milestones. Such a consensus on where to go together is only one step toward establishing visualization science, in the long-term perspective, as a discipline with relevance comparable to chemistry, mathematics, language, or history. First, this article introduces the conference setting. Second, it describes the research goals and findings from the nine workshops. Third, a survey among 62 participants about the originality and importance of each research goal is presented and discussed. Finally, the article presents a synthesis of the nine research goals in the form of a 10th research goal, namely Visualizing Future Cities. The article is relevant for visualization researchers, trend scouts, and research programme directors who define the topics that receive funding. © 2007 Palgrave Macmillan Ltd. All rights reserved.

Relevance:

80.00%

Publisher:

Abstract:

A process to fabricate solution-processable thin-film transistors (TFTs) with a one-step self-aligned definition of the dimensions in all functional layers is demonstrated. The TFT channel, semiconductor materials, and effective gate dimensions of the different layers are determined by a one-step imprint process and the subsequent pattern transfer, without the need for multiple patterning and mask alignment. The process is compatible with fabrication of large-scale circuits. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

Relevance:

80.00%

Publisher:

Abstract:

In the Climate Change Act of 2008 the UK Government pledged to reduce carbon emissions by 80% by 2050. As one step towards this, regulations are being introduced requiring all new buildings to be ‘zero carbon’ by 2019. These are defined as buildings which emit net zero carbon during their operational lifetime. However, in order to meet the 80% target it is necessary to reduce the carbon emitted during the whole life-cycle of buildings, including that emitted during the processes of construction. These elements make up the ‘embodied carbon’ of the building. While there are no regulations yet in place to restrict embodied carbon, a number of different approaches have been taken. There are several existing databases of embodied carbon and embodied energy. Most provide data for the material extraction and manufacturing only, the ‘cradle to factory gate’ phase. In addition to the databases, various software tools have been developed to calculate embodied energy and carbon of individual buildings. A third source of data comes from the research literature, in which individual life cycle analyses of buildings are reported. This paper provides a comprehensive review, comparing and assessing data sources, boundaries and methodologies. The paper concludes that the wide variations in these aspects produce incomparable results. It highlights the areas where existing data is reliable, and where new data and more precise methods are needed. This comprehensive review will guide the future development of a consistent and transparent database and software tool to calculate the embodied energy and carbon of buildings.

Relevance:

80.00%

Publisher:

Abstract:

In order to understand how unburned hydrocarbons emerge from SI engines and, in particular, how non-fuel hydrocarbons are formed and oxidized, a new gas sampling technique has been developed. A sampling unit, based on a combination of techniques used in the Fast Flame Ionization Detector (FFID) and wall-mounted sampling valves, was designed and built to capture a sample of exhaust gas during a specific period of the exhaust process and from a specific location within the exhaust port. The sampling unit consists of a transfer tube with one end in the exhaust port and the other connected to a three-way valve that leads, on one side, to a FFID and, on the other, to a vacuum chamber with a high-speed solenoid valve. Exhaust gas, drawn by the pressure drop into the vacuum chamber, impinges on the face of the solenoid valve and flows radially outward. Once per cycle during a specified crank angle interval, the solenoid valve opens and traps exhaust gas in a storage unit, from which gas chromatography (GC) measurements are made. The port end of the transfer tube can be moved to different locations longitudinally or radially, thus allowing spatial resolution and capturing any concentration differences between port walls and the center of the flow stream. Further, the solenoid valve's opening and closing times can be adjusted to allow sampling over a window as small as 0.6 ms during any portion of the cycle, allowing resolution of a crank angle interval as small as 15°CA. Cycle averaged total HC concentration measured by the FFID and that measured by the sampling unit are in good agreement, while the sampling unit goes one step further than the FFID by providing species concentrations. Comparison with previous measurements using wall-mounted sampling valves suggests that this sampling unit is fully capable of providing species concentration information as a function of air/fuel ratio, load, and engine speed at specific crank angles. 
© Copyright 1996 Society of Automotive Engineers, Inc.
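The quoted figures imply a specific operating point: a 0.6 ms sampling window sweeps 15°CA only at a particular engine speed. The back-of-envelope check below is ours, not the paper's; the ~4167 rpm figure is derived from the two quoted numbers, not stated in the abstract.

```python
def window_to_crank_angle(window_ms: float, rpm: float) -> float:
    """Crank angle (degrees) swept during a sampling window at a given engine speed."""
    deg_per_ms = rpm * 360.0 / 60.0 / 1000.0  # degrees of crank rotation per millisecond
    return window_ms * deg_per_ms

# At roughly 4167 rpm, the 0.6 ms minimum window spans about 15 deg CA:
print(round(window_to_crank_angle(0.6, 4167.0), 2))  # 15.0
```

At lower engine speeds the same 0.6 ms window resolves a proportionally finer crank-angle interval.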

Relevance:

30.00%

Publisher:

Abstract:

Sequential Monte Carlo (SMC) methods are popular computational tools for Bayesian inference in non-linear non-Gaussian state-space models. For this class of models, we propose SMC algorithms to compute the score vector and observed information matrix recursively in time. We propose two different SMC implementations, one with computational complexity $\mathcal{O}(N)$ and the other with complexity $\mathcal{O}(N^{2})$, where $N$ is the number of importance sampling draws. Although cheaper, the performance of the $\mathcal{O}(N)$ method degrades quickly in time, as it inherently relies on the SMC approximation of a sequence of probability distributions whose dimension increases linearly with time. In particular, even under strong mixing assumptions, the variance of the estimates computed with the $\mathcal{O}(N)$ method increases at least quadratically in time. The $\mathcal{O}(N^{2})$ method is a non-standard SMC implementation that does not suffer from this rapid degradation. We then show how both methods can be used to perform batch and recursive parameter estimation.
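The $\mathcal{O}(N)$ path-space approach can be illustrated with a minimal sketch: each particle carries the cumulative gradient of the log transition densities along its ancestral path, and the score follows from Fisher's identity. The scalar linear-Gaussian model, parameter values, and bootstrap-filter details below are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np

def smc_score(y, theta, sv=1.0, sw=1.0, N=500, rng=None):
    """O(N) SMC estimate of d/dtheta log p(y_{1:T}) via Fisher's identity.

    Illustrative model: x_t = theta*x_{t-1} + v_t, y_t = x_t + w_t,
    with v_t ~ N(0, sv^2) and w_t ~ N(0, sw^2).
    """
    if rng is None:
        rng = np.random.default_rng(0)
    x = rng.normal(0.0, sv, N)      # initial particles
    alpha = np.zeros(N)             # path-wise cumulative gradients
    for yt in y:
        xprev = x
        x = theta * xprev + rng.normal(0.0, sv, N)           # propagate
        alpha = alpha + xprev * (x - theta * xprev) / sv**2  # grad of log f(x_t | x_{t-1})
        logw = -0.5 * ((yt - x) / sw) ** 2                   # bootstrap weights
        w = np.exp(logw - logw.max())
        w /= w.sum()
        idx = rng.choice(N, size=N, p=w)                     # multinomial resampling
        x, alpha = x[idx], alpha[idx]                        # gradients follow ancestry
    return alpha.mean()  # weights are uniform after resampling

# Usage: simulate data at theta_true and evaluate the score there
rng = np.random.default_rng(1)
T, theta_true = 50, 0.8
xs = np.zeros(T)
for t in range(1, T):
    xs[t] = theta_true * xs[t - 1] + rng.normal()
ys = xs + rng.normal(size=T)
print(smc_score(ys, theta_true))
```

Because paths share ancestors after repeated resampling, the `alpha` values collapse onto few distinct histories, which is exactly the degeneracy behind the quadratic variance growth noted above.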

Relevance:

30.00%

Publisher:

Abstract:

In recent years there has been growing interest amongst the speech research community in the use of spectral estimators which circumvent the traditional quasi-stationary assumption and provide greater time-frequency (t-f) resolution than conventional spectral estimators, such as the short-time Fourier power spectrum (STFPS). One distribution in particular, the Wigner distribution (WD), has attracted considerable interest. However, experimental studies have indicated that, despite its improved t-f resolution, employing the WD as the front end of a speech recognition system actually reduces recognition performance; only by explicitly re-introducing t-f smoothing into the WD are recognition rates improved. In this paper we provide an explanation for these findings. By treating the spectral estimation problem as one of optimizing a bias-variance trade-off, we show why additional t-f smoothing improves recognition rates, despite reducing the t-f resolution of the spectral estimator. A practical adaptive smoothing algorithm is presented, which attempts to match the degree of smoothing introduced into the WD with the time-varying quasi-stationary regions within the speech waveform. The recognition performance of the resulting adaptively smoothed estimator is found to be comparable to that of conventional filterbank estimators, yet the average temporal sampling rate of the resulting spectral vectors is reduced by around a factor of 10. © 1992.
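To make the re-introduced smoothing concrete, the sketch below computes a discrete pseudo-Wigner distribution of a test signal and applies an explicit smoothing kernel along time. The window length, fixed Gaussian smoother, and test signal are illustrative choices, not the paper's adaptive algorithm.

```python
import numpy as np

def pseudo_wigner(x, half_win=32):
    """Discrete pseudo-Wigner distribution: FFT over lag m of x[n+m]*conj(x[n-m])."""
    x = np.asarray(x, dtype=complex)
    N, M = len(x), 2 * half_win
    W = np.zeros((N, M))
    m = np.arange(-half_win, half_win)
    for n in range(N):
        i1, i2 = n + m, n - m
        valid = (i1 >= 0) & (i1 < N) & (i2 >= 0) & (i2 < N)
        r = np.zeros(M, dtype=complex)
        r[valid] = x[i1[valid]] * np.conj(x[i2[valid]])
        # ifftshift puts lag 0 at index 0 so the FFT matches the WD definition
        W[n] = np.real(np.fft.fft(np.fft.ifftshift(r)))
    return W

def smooth_time(W, sigma=4.0):
    """Gaussian smoothing along time: lowers t-f resolution but also variance."""
    half = 3 * int(sigma)
    t = np.arange(-half, half + 1)
    k = np.exp(-0.5 * (t / sigma) ** 2)
    k /= k.sum()
    return np.apply_along_axis(lambda col: np.convolve(col, k, mode="same"), 0, W)

# Usage: a linear chirp; smoothing suppresses the WD's oscillatory cross-terms
n = np.arange(256)
chirp = np.cos(2 * np.pi * (0.05 + 0.0005 * n) * n)
W_raw = pseudo_wigner(chirp)
W_smooth = smooth_time(W_raw)
print(W_raw.shape, W_smooth.shape)  # (256, 64) (256, 64)
```

An adaptive scheme of the kind the paper describes would vary `sigma` with the local quasi-stationarity of the waveform rather than holding it fixed.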

Relevance:

30.00%

Publisher:

Abstract:

Human locomotion is known to be influenced by observation of another person's gait. For example, athletes often synchronize their step in long-distance races. However, how interaction with a virtual runner affects the gait of a real runner has not been studied. We investigated this by creating an illusion of running behind a virtual model (VM), using a treadmill and a large-screen virtual environment showing a video of a VM. We looked at step synchronization between the real and virtual runner and at the role of step frequency (SF) in the real runner's perception of VM speed. We found that subjects match the VM's SF when asked to match the VM's speed with their own (Figure 1). This indicates that step synchronization may be a strategy of speed matching or speed perception. Subjects chose higher speeds when the VM's SF was higher (though VM speed was 12 km/h in all videos). This effect was more pronounced when the speed estimate was rated verbally while standing still (Figure 2). This may be due to correlated physical activity affecting the perception of VM speed [Jacobs et al. 2005], or to step synchronization altering the subjects' perception of self speed [Durgin et al. 2007]. Our findings indicate that third-person activity in a collaborative virtual locomotive environment can have a pronounced effect on an observer's gait activity and their perceptual judgments of the activity of others: the SF of others (virtual or real) can potentially influence one's perception of self speed and lead to changes in speed and SF. A better understanding of the underlying mechanisms would support the design of more compelling virtual trainers and may be instructive for competitive athletics in the real world. © 2009 ACM.

Relevance:

30.00%

Publisher:

Abstract:

Model-based compensation schemes are a powerful approach for noise-robust speech recognition. Recently there have been a number of investigations into adaptive training and into estimating the noise models used for model adaptation. This paper examines the use of EM-based schemes for both canonical model and noise estimation, including discriminative adaptive training. One issue that arises when estimating the noise model is a mismatch between the noise estimation approximation and the final model compensation scheme. This paper proposes FA-style compensation, where this mismatch is eliminated, though at the expense of sensitivity to the initial noise estimates. EM-based discriminative adaptive training is evaluated on in-car and Aurora4 tasks. FA-style compensation is then evaluated in an incremental mode on the in-car task. © 2011 IEEE.

Relevance:

30.00%

Publisher:

Abstract:

Novel statistical models are proposed and developed in this paper for automated multiple-pitch estimation problems. Point estimates of the parameters of partial frequencies of a musical note are modeled as realizations from a non-homogeneous Poisson process defined on the frequency axis. When several notes are combined, the processes for the individual notes combine to give a new Poisson process whose likelihood is easy to compute. This model avoids the data-association step of linking the harmonics of each note with the corresponding partials and is ideal for efficient Bayesian inference of unknown multiple fundamental frequencies in a signal. © 2011 IEEE.
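The superposition property that makes this model attractive can be sketched as follows: each note contributes an intensity with bumps at its harmonics, intensities of simultaneous notes simply add, and the likelihood of the detected partial frequencies needs no partial-to-note assignment. The Gaussian bump shape, parameter values, and chord below are illustrative assumptions, not the paper's exact model.

```python
import numpy as np

def note_intensity(freqs, f0, n_harmonics=8, width=2.0, height=5.0):
    """Poisson intensity of one note: Gaussian bumps at the harmonics of f0 (Hz)."""
    freqs = np.asarray(freqs, dtype=float)
    lam = np.zeros_like(freqs)
    for h in range(1, n_harmonics + 1):
        lam += height * np.exp(-0.5 * ((freqs - h * f0) / width) ** 2)
    return lam

def log_likelihood(partials, f0s, f_grid, background=1e-3):
    """Log-likelihood of detected partial frequencies under the superposed process.

    Inhomogeneous Poisson: log p = sum_j log lam(f_j) - integral lam(f) df.
    Note intensities add, so no data-association step is needed.
    """
    lam_obs = background + sum(note_intensity(partials, f0) for f0 in f0s)
    lam_grid = background + sum(note_intensity(f_grid, f0) for f0 in f0s)
    integral = np.sum(lam_grid) * (f_grid[1] - f_grid[0])  # uniform grid
    return np.sum(np.log(lam_obs)) - integral

# Usage: partials of a two-note chord score the true fundamentals higher
grid = np.linspace(20.0, 3000.0, 6000)
partials = [220.0, 440.0, 660.0, 277.0, 554.0, 831.0]
print(log_likelihood(partials, [220.0, 277.0], grid) >
      log_likelihood(partials, [200.0, 300.0], grid))  # True
```

Bayesian inference over the unknown fundamentals then amounts to exploring `f0s` under this likelihood, e.g. with MCMC.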

Relevance:

30.00%

Publisher:

Abstract:

Camera motion estimation is one of the most significant steps for structure-from-motion (SFM) with a monocular camera. The normalized 8-point, the 7-point, and the 5-point algorithms are normally adopted to perform the estimation, each of which has distinct performance characteristics. Given the unique needs and challenges associated with civil infrastructure SFM scenarios, selection of the proper algorithm directly impacts the structure reconstruction results. In this paper, a comparison study of the aforementioned algorithms is conducted to identify the most suitable algorithm, in terms of accuracy and reliability, for reconstructing civil infrastructure. The free variables tested are baseline, depth, and motion. A concrete girder bridge was selected as the "test-bed", reconstructed using an off-the-shelf camera capturing imagery from all possible positions that maximally capture the bridge's features and geometry. The feature points in the images were extracted and matched via the SURF descriptor. Finally, camera motions are estimated from the corresponding image points by applying the aforementioned algorithms, and the results are evaluated.
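For reference, the first of the three algorithms compared above, the normalized 8-point algorithm, can be sketched in a few lines of numpy. The camera intrinsics, relative pose, and synthetic scene below are illustrative assumptions used only to exercise the estimator.

```python
import numpy as np

def normalize(pts):
    """Hartley normalization: centroid to origin, mean distance sqrt(2)."""
    c = pts.mean(axis=0)
    s = np.sqrt(2.0) / np.mean(np.linalg.norm(pts - c, axis=1))
    T = np.array([[s, 0.0, -s * c[0]], [0.0, s, -s * c[1]], [0.0, 0.0, 1.0]])
    ph = np.column_stack([pts, np.ones(len(pts))]) @ T.T
    return ph, T

def eight_point(x1, x2):
    """Normalized 8-point estimate of the fundamental matrix (x2^T F x1 = 0)."""
    p1, T1 = normalize(x1)
    p2, T2 = normalize(x2)
    # One row of the homogeneous system A f = 0 per correspondence
    A = np.column_stack([
        p2[:, 0] * p1[:, 0], p2[:, 0] * p1[:, 1], p2[:, 0],
        p2[:, 1] * p1[:, 0], p2[:, 1] * p1[:, 1], p2[:, 1],
        p1[:, 0], p1[:, 1], np.ones(len(p1)),
    ])
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)
    U, S, Vt = np.linalg.svd(F)                 # enforce the rank-2 constraint
    F = U @ np.diag([S[0], S[1], 0.0]) @ Vt
    F = T2.T @ F @ T1                           # undo the normalization
    return F / np.linalg.norm(F)

# Synthetic check: project random 3D points into two views and verify the
# epipolar constraint x2^T F x1 ~ 0 for noise-free correspondences
rng = np.random.default_rng(0)
K = np.array([[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]])
ang = 0.1  # yaw between the two views (radians)
R = np.array([[np.cos(ang), 0.0, np.sin(ang)],
              [0.0, 1.0, 0.0],
              [-np.sin(ang), 0.0, np.cos(ang)]])
t = np.array([1.0, 0.2, 0.1])
X = rng.uniform([-2.0, -2.0, 4.0], [2.0, 2.0, 8.0], (20, 3))
x1 = (K @ X.T).T;              x1 = x1[:, :2] / x1[:, 2:]
x2 = (K @ (X @ R.T + t).T).T;  x2 = x2[:, :2] / x2[:, 2:]
F = eight_point(x1, x2)
residuals = [abs(np.r_[b, 1.0] @ F @ np.r_[a, 1.0]) for a, b in zip(x1, x2)]
print(max(residuals) < 1e-4)
```

The 7-point and 5-point algorithms trade the larger linear system for smaller minimal sets (with cubic or polynomial root-finding steps), which is what drives the accuracy and reliability differences the paper studies.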