Abstract:
Recent development of solution-processable organic semiconductors marks the emergence of a new generation of air-stable, high-performance p- and n-type materials, making printed organic complementary (CMOS) circuits viable for real applications. The main technical bottleneck preventing organic CMOS from being adopted as the next-generation organic integrated circuit is depositing and patterning both p- and n-type semiconductor materials at high resolution simultaneously. This is a significant technical challenge, particularly when multiple layers must be patterned without mask alignment. In this paper, we propose a one-step self-aligned fabrication process that allows the simultaneous deposition and high-resolution patterning of functional layers for both p- and n-channel thin film transistors (TFTs). All the dimensional information of the device components is featured on a single imprinting stamp, and the TFT-channel geometry, electrodes with different work functions, p- and n-type semiconductors and effective gate dimensions can all be accurately defined by one-step imprinting and the subsequent pattern transfer process. As an example, we have demonstrated an organic complementary inverter fabricated by 3D imprinting in combination with inkjet printing, and the measured electrical characteristics validate the feasibility of the novel technique. © 2012 Elsevier B.V. All rights reserved.
Abstract:
Real-time cardiac ultrasound allows monitoring of heart motion during intracardiac beating-heart procedures. Our application assists atrial septal defect (ASD) closure techniques using real-time 3D ultrasound guidance. One major image-processing challenge is handling information at a high frame rate. We present an optimized block flow technique, which combines probability-based velocity computation for an entire block with template matching. We propose adapted similarity constraints both from frame to frame, to conserve energy, and globally, to minimize errors. We show tracking results on eight in-vivo 4D datasets acquired from porcine beating-heart procedures. Computing velocity at the block level with an optimized scheme, our technique tracks ASD motion at 41 frames/s. We analyze the errors of motion estimation and retrieve the cardiac cycle in ungated images. © 2007 IEEE.
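The paper's optimized block flow method combines probability-based velocity computation with template matching. As a much-simplified, pure-Python illustration of the template-matching half only, the sketch below exhaustively searches nearby offsets for the block with the best normalized cross-correlation; the synthetic frames and all parameters are invented for the example, and the authors' actual method is far more efficient.

```python
def patch(img, top, left, size):
    """Flatten a size x size block starting at (top, left)."""
    return [img[r][c] for r in range(top, top + size)
                      for c in range(left, left + size)]

def ncc(a, b):
    """Normalized cross-correlation of two equal-length vectors."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = (sum((x - ma) ** 2 for x in a) * sum((y - mb) ** 2 for y in b)) ** 0.5
    return num / den if den > 0 else 0.0

def track_block(prev, curr, top, left, size, search=4):
    """Displacement (dy, dx) of the block that maximizes NCC."""
    tmpl = patch(prev, top, left, size)
    h, w = len(curr), len(curr[0])
    best, best_d = float("-inf"), (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if 0 <= y and 0 <= x and y + size <= h and x + size <= w:
                s = ncc(tmpl, patch(curr, y, x, size))
                if s > best:
                    best, best_d = s, (dy, dx)
    return best_d

# Synthetic frames: a bright square moves 2 px down and 1 px right.
prev = [[1.0 if 8 <= r < 16 and 8 <= c < 16 else 0.0 for c in range(32)]
        for r in range(32)]
curr = [[1.0 if 10 <= r < 18 and 9 <= c < 17 else 0.0 for c in range(32)]
        for r in range(32)]
print(track_block(prev, curr, 6, 6, 8))  # -> (2, 1)
```

An exhaustive search like this costs O(search² · size²) per block, which is why the paper computes velocity once per block with an optimized scheme rather than per pixel.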
Abstract:
Ubiquitous in-building Real Time Location Systems (RTLS) today are limited by costly active radio frequency identification (RFID) tags and the short-range portal readers of low-cost passive RFID tags. We present a novel technology that locates RFID tags using a new approach based on (a) minimising RFID fading using antenna diversity, frequency dithering, phase dithering and narrow beam-width antennas, (b) measuring a combination of RSSI and phase shift in the coherently received tag backscatter signals and (c) selectively using information from the system by applying weighting techniques to minimise error. These techniques make it possible to locate tags to an accuracy of less than one metre. This breakthrough will enable, for the first time, the low-cost tagging of items and the possibility of locating them with relatively high precision.
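The abstract does not specify the weighting scheme, so the sketch below shows only the generic idea of error-minimising weighting: each measurement contributes a rough position fix together with a quality weight (for instance derived from signal strength), and the estimate is their weighted mean. The fixes and weights are invented for illustration; this is not the authors' algorithm.

```python
# Each entry: ((x, y) position fix in metres, quality weight).
# A noisy reading gets a low weight and so contributes little.
fixes = [
    ((1.0, 2.0), 4.0),
    ((1.4, 2.2), 2.0),
    ((3.0, 0.0), 0.5),
]
wsum = sum(w for _, w in fixes)
x_hat = sum(p[0] * w for p, w in fixes) / wsum
y_hat = sum(p[1] * w for p, w in fixes) / wsum
print(round(x_hat, 3), round(y_hat, 3))
```

Down-weighting low-quality readings in this way pulls the estimate toward the consistent, high-confidence fixes, which is the intuition behind selective use of information from the system.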
Abstract:
Information and Communication Technology (ICT) is becoming increasingly central to many people’s lives, making it possible to be connected in any place at any time, be unceasingly and instantly informed, and benefit from greater economic and educational opportunities. With all the benefits afforded by these new-found capabilities, however, come potential drawbacks. A plethora of new PCs, laptops, tablets, smartphones, Bluetooth, the internet, Wi-Fi (the list goes on) expect us to know or be able to guess, what, where and when to connect, click, double-click, tap, flick, scroll, in order to realise these benefits, and to have the physical and cognitive capability to do all these things. One of the groups most affected by this increase in high-demand technology is older people. They do not understand and use technology in the same way that younger generations do, because they grew up in the simpler electro-mechanical era and embedded that particular model of the world in their minds. Any consequential difficulty in familiarising themselves with modern ICT and effectively applying it to their needs can also be exacerbated by age-related changes in vision, motor control and cognitive functioning. Such challenges lead to digital exclusion. Much has been written about this topic over the years, usually by academics from the area of inclusive product design. The issue is complex and it is fair to say that no one researcher has the whole picture. It is difficult to understand and adequately address the issue of digital exclusion among the older generation without looking across disciplines and at industry’s and government’s understanding, motivation and efforts toward resolving this important problem. To do otherwise is to risk misunderstanding the true impact that ICT has and could have on people’s lives across all generations. 
In this European year of Active Ageing and Solidarity between Generations, and as the British government is moving forward with its Digital by Default initiative as part of a wider objective to make ICT accessible to as many people as possible by 2015, the Engineering Design Centre (EDC) at the University of Cambridge collaborated with BT to produce a book of thought pieces to address, and where appropriate redress, these important and long-standing issues. “Ageing, Adaption and Accessibility: Time for the Inclusive Revolution!” brings together opinions and insights from twenty-one prominent thought leaders from government, industry and academia regarding the problems, opportunities and strategies for combating digital exclusion among senior citizens. The contributing experts were selected as individuals, rather than representatives of organisations, to provide the broadest possible range of perspectives. They are renowned in their respective fields and their opinions are formed not only from their own work, but also from the contributions of others in their area. Their views were elicited through conversations conducted by the editors of this book, who then drafted the thought pieces to be edited and approved by the experts. We hope that this unique collection of thought pieces will give you a broader perspective on ageing and people's adaptation to the ever-changing world of technology, and insights into better ways of designing digital devices and services for the older population.
Abstract:
This paper presents an efficient algorithm for robust network reconstruction of Linear Time-Invariant (LTI) systems in the presence of noise, estimation errors and unmodelled nonlinearities. The method here builds on previous work [1] on robust reconstruction to provide a practical implementation with polynomial computational complexity. Following the same experimental protocol, the algorithm obtains a set of structurally-related candidate solutions spanning every level of sparsity. We prove the existence of a magnitude bound on the noise, which if satisfied, guarantees that one of these structures is the correct solution. A problem-specific model-selection procedure then selects a single solution from this set and provides a measure of confidence in that solution. Extensive simulations quantify the expected performance for different levels of noise and show that significantly more noise can be tolerated in comparison to the original method. © 2012 IEEE.
Abstract:
The fundamental aim of clustering algorithms is to partition data points. We consider tasks where the discovered partition is allowed to vary with some covariate such as space or time. One approach would be to use fragmentation-coagulation processes, but these, being Markov processes, are restricted to linear or tree-structured covariate spaces. We define a partition-valued process on an arbitrary covariate space using Gaussian processes. We use the process to construct a multitask clustering model which partitions data points in a similar way across multiple data sources, and a time series model of network data which allows cluster assignments to vary over time. We describe sampling algorithms for inference and apply our method to defining cancer subtypes based on different types of cellular characteristics, finding regulatory modules from gene expression data from multiple human populations, and discovering time-varying community structure in a social network.
Abstract:
We live in an era of abundant data. This has necessitated the development of new and innovative statistical algorithms to get the most from experimental data. For example, faster algorithms make practical the analysis of larger genomic data sets, allowing us to extend the utility of cutting-edge statistical methods. We present a randomised algorithm that accelerates the clustering of time series data using the Bayesian Hierarchical Clustering (BHC) statistical method. BHC is a general method for clustering any discretely sampled time series data. In this paper we focus on a particular application to microarray gene expression data. We define and analyse the randomised algorithm, before presenting results on both synthetic and real biological data sets. We show that the randomised algorithm leads to substantial gains in speed with minimal loss in clustering quality. The randomised time series BHC algorithm is available as part of the R package BHC, which is available for download from Bioconductor (version 2.10 and above) via http://bioconductor.org/packages/2.10/bioc/html/BHC.html. We have also made available a set of R scripts which can be used to reproduce the analyses carried out in this paper; these are available from https://sites.google.com/site/randomisedbhc/.
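BHC scores each candidate merge with a Bayesian model comparison, which is what the randomised algorithm accelerates. The sketch below substitutes plain average-linkage agglomerative clustering with Euclidean distance, a standard stand-in that only illustrates the bottom-up merging structure shared by all hierarchical clustering methods; the toy series are invented.

```python
def dist(a, b):
    """Euclidean distance between two equal-length time series."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

# Four invented expression profiles: g1/g2 co-vary, g3/g4 are anti-phase.
series = {
    "g1": [0.1, 0.9, 0.2, 0.8],
    "g2": [0.0, 1.0, 0.1, 0.9],
    "g3": [0.9, 0.1, 0.8, 0.2],
    "g4": [1.0, 0.0, 0.9, 0.1],
}

clusters = [[name] for name in series]
while len(clusters) > 2:
    # Merge the pair of clusters with the smallest average pairwise distance.
    best = None
    for i in range(len(clusters)):
        for j in range(i + 1, len(clusters)):
            d = sum(dist(series[a], series[b])
                    for a in clusters[i] for b in clusters[j])
            d /= len(clusters[i]) * len(clusters[j])
            if best is None or d < best[0]:
                best = (d, i, j)
    _, i, j = best
    clusters[i] = clusters[i] + clusters[j]
    del clusters[j]

print(sorted(sorted(c) for c in clusters))  # -> [['g1', 'g2'], ['g3', 'g4']]
```

The naive pairwise search above is quadratic per merge; randomisation of exactly this kind of search is what yields the speed gains the paper reports for BHC.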
Abstract:
The measured time-history of the cylinder pressure is the principal diagnostic in the analysis of processes within the combustion chamber. This paper defines, implements and tests a pressure analysis algorithm for a Formula One racing engine in MATLAB. Evaluation of the software on real data is presented. The sensitivity of the model to the variability of burn parameter estimates is also discussed. Copyright © 1997 Society of Automotive Engineers, Inc.
Abstract:
Animals repeat rewarded behaviors, but the physiological basis of reward-based learning has only been partially elucidated. On one hand, experimental evidence shows that the neuromodulator dopamine carries information about rewards and affects synaptic plasticity. On the other hand, the theory of reinforcement learning provides a framework for reward-based learning. Recent models of reward-modulated spike-timing-dependent plasticity have made first steps towards bridging the gap between the two approaches, but faced two problems. First, reinforcement learning is typically formulated in a discrete framework, ill-adapted to the description of natural situations. Second, biologically plausible models of reward-modulated spike-timing-dependent plasticity require precise calculation of the reward prediction error, yet it remains to be shown how this can be computed by neurons. Here we propose a solution to these problems by extending the continuous temporal difference (TD) learning of Doya (2000) to the case of spiking neurons in an actor-critic network operating in continuous time, and with continuous state and action representations. In our model, the critic learns to predict expected future rewards in real time. Its activity, together with actual rewards, conditions the delivery of a neuromodulatory TD signal to itself and to the actor, which is responsible for action choice. In simulations, we show that such an architecture can solve a Morris water-maze-like navigation task, in a number of trials consistent with reported animal performance. We also use our model to solve the acrobot and the cartpole problems, two complex motor control tasks. Our model provides a plausible way of computing reward prediction error in the brain. Moreover, the analytically derived learning rule is consistent with experimental evidence for dopamine-modulated spike-timing-dependent plasticity.
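The paper works in continuous time with spiking neurons; as a minimal discrete illustration of the underlying TD error δ = r + γV(s′) − V(s) that the critic computes, the following tabular TD(0) sketch learns state values on a toy chain. The chain, γ, learning rate and episode count are all invented for the example and bear no relation to the spiking actor-critic itself.

```python
import random

# A 5-state chain: state 4 is terminal, reward 1 on reaching it.
# Moves go right with probability 0.9, otherwise left (floored at 0).
gamma, alpha = 0.9, 0.1
V = [0.0] * 5                      # V[4] is terminal and stays 0

random.seed(0)
for episode in range(2000):
    s = 0
    while s != 4:
        s_next = s + 1 if random.random() < 0.9 else max(s - 1, 0)
        r = 1.0 if s_next == 4 else 0.0
        delta = r + gamma * V[s_next] - V[s]   # TD error
        V[s] += alpha * delta                  # value update driven by delta
        s = s_next

print([round(v, 2) for v in V[:4]])  # values increase toward the goal
```

In the paper's architecture this same δ is carried by a neuromodulatory signal that gates plasticity in both the critic and the actor, rather than being applied as an explicit table update.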
Abstract:
In order to understand how unburned hydrocarbons emerge from SI engines and, in particular, how non-fuel hydrocarbons are formed and oxidized, a new gas sampling technique has been developed. A sampling unit, based on a combination of techniques used in the Fast Flame Ionization Detector (FFID) and wall-mounted sampling valves, was designed and built to capture a sample of exhaust gas during a specific period of the exhaust process and from a specific location within the exhaust port. The sampling unit consists of a transfer tube with one end in the exhaust port and the other connected to a three-way valve that leads, on one side, to a FFID and, on the other, to a vacuum chamber with a high-speed solenoid valve. Exhaust gas, drawn by the pressure drop into the vacuum chamber, impinges on the face of the solenoid valve and flows radially outward. Once per cycle during a specified crank angle interval, the solenoid valve opens and traps exhaust gas in a storage unit, from which gas chromatography (GC) measurements are made. The port end of the transfer tube can be moved to different locations longitudinally or radially, thus allowing spatial resolution and capturing any concentration differences between port walls and the center of the flow stream. Further, the solenoid valve's opening and closing times can be adjusted to allow sampling over a window as small as 0.6 ms during any portion of the cycle, allowing resolution of a crank angle interval as small as 15°CA. Cycle averaged total HC concentration measured by the FFID and that measured by the sampling unit are in good agreement, while the sampling unit goes one step further than the FFID by providing species concentrations. Comparison with previous measurements using wall-mounted sampling valves suggests that this sampling unit is fully capable of providing species concentration information as a function of air/fuel ratio, load, and engine speed at specific crank angles. 
© Copyright 1996 Society of Automotive Engineers, Inc.
Abstract:
The objective of the research conducted by the authors is to explore the feasibility of determining reliable in situ values of shear modulus as a function of strain. In this paper the meaning of the material stiffness obtained from impact and harmonic excitation tests on a surface slab is discussed. A one-dimensional discrete model with nonlinear material stiffness is used for this purpose. When a static load is applied followed by an impact excitation, if the amplitude of the impact is very small, the wave velocity measured using cross-correlation matches the wave velocity calculated from the tangent modulus corresponding to the state of stress caused by the applied static load. The duration of the impact affects the magnitude of the displacement and the particle velocity but has very little effect on the estimation of the wave velocity for the magnitudes considered herein. When a harmonic excitation is applied, the cross-correlation of the time histories at different depths estimates a wave velocity close to the one calculated from the secant modulus in the stress-strain loop under steady-state conditions. Copyright © 2008 John Wiley & Sons, Ltd.
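The cross-correlation step can be sketched as follows: given records at two sensors a known distance apart, the lag that maximizes their cross-correlation is the travel time, and velocity = spacing / delay. The rectangular pulse, sample rate and spacing below are synthetic, chosen only to make the arithmetic transparent.

```python
def pulse(t0, n):
    """Rectangular pulse of width 5 samples starting at sample t0."""
    return [1.0 if t0 <= i < t0 + 5 else 0.0 for i in range(n)]

def xcorr_lag(a, b, max_lag):
    """Non-negative lag of b relative to a that maximizes the cross-correlation."""
    best_lag, best = 0, float("-inf")
    for lag in range(max_lag + 1):
        s = sum(a[i] * b[i + lag] for i in range(len(a) - lag))
        if s > best:
            best, best_lag = s, lag
    return best_lag

n, dt, spacing = 200, 1e-4, 2.0   # samples, 0.1 ms time step, 2 m between sensors
top = pulse(50, n)                # record at the shallow sensor
bottom = pulse(90, n)             # same pulse arrives 40 samples later at depth
lag = xcorr_lag(top, bottom, 100)
velocity = spacing / (lag * dt)   # 2.0 / (40 * 1e-4) = 500 m/s
print(lag, velocity)
```

With real test data the records are noisy rather than clean pulses, which is precisely why the correlation peak, rather than a first-arrival pick, is used to estimate the delay.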
Abstract:
We solve the problem of steering a three-level quantum system from one eigen-state to another in minimum time and study its possible extension to the time-optimal control problem for a general n-level quantum system. For the three-level system we find all optimal controls by finding two types of symmetry in the problem: ℤ2 × S3 discrete symmetry and S1 continuous symmetry, and exploiting them to solve the problem through discrete reduction and symplectic reduction. We then study the geometry, in the same framework, which occurs in the time-optimal control of a general n-level quantum system. © 2007 IEEE.
Abstract:
We solve the problem of steering a three-level quantum system from one eigen-state to another in minimum time and study its possible extension to the time-optimal control problem for a general n-level quantum system. For the three-level system we find all optimal controls by finding two types of symmetry in the problem: ℤ2 × S3 discrete symmetry and S1 continuous symmetry, and exploiting them to solve the problem through discrete reduction and symplectic reduction. We then study the geometry, in the same framework, which occurs in the time-optimal control of a general n-level quantum system. Copyright © 2007 Watam Press.
Abstract:
Coupled Monte Carlo depletion systems provide a versatile and accurate tool for analyzing advanced thermal and fast reactor designs for a variety of fuel compositions and geometries. The main drawback of Monte Carlo-based systems is a long calculation time, which imposes significant restrictions on the complexity and amount of design-oriented calculations. This paper presents an alternative approach to interfacing the Monte Carlo and depletion modules aimed at addressing this problem. The main idea is to calculate the one-group cross sections for all relevant isotopes required by the depletion module in a separate module external to the Monte Carlo calculations. Thus, the Monte Carlo module produces only the criticality and neutron spectrum, without tallying the individual isotope reaction rates. The one-group cross sections for all isotopes are generated in a separate module by collapsing a universal multigroup (MG) cross-section library using the Monte Carlo-calculated flux. Here, the term "universal" means that a single MG cross-section set is applicable to all reactor systems and is independent of reactor characteristics such as the neutron spectrum; fuel composition; and fuel cell, assembly, and core geometries. This approach was originally proposed by Haeck et al. and implemented in the ALEPH code. Implementation of the proposed approach to Monte Carlo burnup interfacing was carried out through the BGCORE system. One-group cross sections generated by the BGCORE system were compared with those tallied directly by the MCNP code. Analysis of this comparison led to the conclusion that, in order to achieve the accuracy required for reliable core and fuel cycle analysis, accounting for the background cross section (σ0) in the unresolved resonance energy region is essential. An extension of the one-group cross-section generation model was implemented and tested by tabulating and interpolating with a simplified σ0 model.
A significant improvement of the one-group cross-section accuracy was demonstrated.
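The flux-weighted collapse at the heart of the proposed interface is σ_1g = Σ_g σ_g φ_g / Σ_g φ_g. A minimal sketch with invented group data follows; a real library would carry hundreds of energy groups per isotope and reaction, and the σ0 correction discussed above is not modeled here.

```python
# Collapse a multigroup cross section to one group by flux weighting:
#   sigma_1g = sum_g(sigma_g * phi_g) / sum_g(phi_g)
sigma_mg = [1.2, 2.5, 10.0, 45.0]    # group cross sections (barns), invented
phi_mg = [0.50, 0.30, 0.15, 0.05]    # Monte Carlo-estimated group fluxes

sigma_1g = sum(s * p for s, p in zip(sigma_mg, phi_mg)) / sum(phi_mg)
print(round(sigma_1g, 6))
```

Because the Monte Carlo module supplies only the spectrum φ_g, this collapse can be done for every isotope outside the transport calculation, which is exactly what removes the per-isotope tallying burden from the Monte Carlo run.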
Abstract:
The problem of calculating the minimum lap or maneuver time of a nonlinear vehicle, which is linearized at each time step, is formulated as a convex optimization problem. The formulation provides an alternative to previously used quasi-steady-state analysis or nonlinear optimization. Key steps are: the use of model predictive control; expressing the minimum time problem as one of maximizing distance traveled along the track centerline; and linearizing the track and vehicle trajectories by expressing them as small displacements from a fixed reference. A consequence of linearizing the vehicle dynamics is that nonoptimal steering control action can be generated, but attention to the constraints and the cost function minimizes the effect. Optimal control actions and vehicle responses for a 90 deg bend are presented and compared to the nonconvex nonlinear programming solution. Copyright © 2013 by ASME.