955 results for: Quadratic, sieve, CUDA, OpenMP, SOC, Tegra K1
Abstract:
Some initial EUVL patterning results for polycarbonate-based non-chemically amplified resists are presented. Without full optimization of the developer, a resolution of 60 nm line/space could be obtained. With slight overexposure (1.4 × E0), 43.5 nm lines at a half pitch of 50 nm could be printed. At 2 × E0, 28.6 nm lines at a half pitch of 50 nm could be obtained, with a LER just above that expected from mask roughness. Upon irradiation with EUV photons, these polymers undergo chain scission with loss of carbon dioxide and carbon monoxide. The remaining photoproducts appear to be non-volatile under standard EUV irradiation conditions but exhibit increased solubility in developer compared with the unirradiated polymer. The sensitivity of the polymers to EUV light is related to their oxygen content, and ways to increase the sensitivity of the polymers to 10 mJ cm-2 are discussed.
Abstract:
Three strategies for the design and synthesis of non-chemically amplified resists (non-CARs) are presented: linear polycarbonates, star polyester-block-poly(methyl methacrylate) copolymers, and comb polymers with polysulfone backbones. The linear polycarbonates were designed to cleave when irradiated with 92 eV photons, and high-Tg alicyclic groups were incorporated into the backbone to increase Tg and etch resistance. The star block copolymers were designed to have a core that is sensitive to 92 eV photons and arms that have the potential to provide properties such as high Tg and etch resistance. Similarly, the polysulfone comb polymers were designed to have an easily degradable backbone and comb arms that impart favorable physical properties. Initial patterning results are presented for a number of these systems.
Abstract:
The increasing use of Hardware Security Modules (HSMs) for identification and authentication of security endpoints has raised numerous privacy and security concerns. HSMs can tie a system or an object, along with its users, to the physical world. However, this enables tracking of the user and/or the object associated with the HSM. Current systems do not adequately address these privacy needs and as such are susceptible to various attacks. In this work, we analyse various security and privacy concerns that arise when deploying such hardware security modules and propose a system that allows users to create pseudonyms from a trusted master public-secret key pair. The proposed system is based on the intractability of factoring and of finding square roots of a quadratic residue modulo a composite number, where the composite number is a product of two large primes. Along with the standard notion of protecting the privacy of a user, the proposed system offers colligation between seemingly independent pseudonyms. This property, when combined with HSMs that store the master secret key, is extremely beneficial to a user, as it offers a convenient way to generate a large number of pseudonyms with relatively small storage requirements.
Abstract:
A pseudonym provides anonymity by protecting the identity of a legitimate user. A user with a pseudonym can interact with an unknown entity and be confident that his/her identity remains secret even if the other entity is dishonest. In this work, we present a system that allows users to create pseudonyms from a trusted master public-secret key pair. The proposed system is based on the intractability of factoring and of finding square roots of a quadratic residue modulo a composite number, where the composite number is a product of two large primes. Our proposal differs from previously published pseudonym systems in that, in addition to the standard notion of protecting the privacy of a user, our system offers colligation between seemingly independent pseudonyms. This property, when combined with a trusted platform that stores the master secret key, is extremely beneficial to a user, as it offers a convenient way to generate a large number of pseudonyms using relatively small storage.
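To illustrate the arithmetic these abstracts rely on (a toy sketch, not the papers' actual scheme), pseudonyms can be derived as a chain of quadratic residues modulo a composite n = pq; the primes, seed and chain length below are hypothetical:

```python
# Toy illustration: pseudonyms as successive quadratic residues mod n = p*q.
# Real deployments need large secret Blum primes; these are tiny stand-ins.
p, q = 10007, 10039  # toy primes, both congruent to 3 mod 4
n = p * q

def pseudonyms(seed, count, n):
    """Derive a chain of pseudonyms by repeated squaring mod n.
    Walking the chain backwards requires square roots mod n, which is
    as hard as factoring n, so only the holder of (p, q) can link them."""
    nyms = []
    x = seed % n
    for _ in range(count):
        x = pow(x, 2, n)  # one squaring yields one new pseudonym
        nyms.append(x)
    return nyms

nyms = pseudonyms(123456789, 5, n)
```

Each pseudonym looks unrelated to the others, yet the master-key holder can verify the colligation by extracting square roots, which matches the "many pseudonyms from small storage" property described above.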
Abstract:
In Chapters 1 through 9 of the book (with the exception of a brief discussion on observers and integral action in Section 5.5 of Chapter 5) we considered constrained optimal control problems for systems without uncertainty, that is, with no unmodelled dynamics or disturbances, and where the full state was available for measurement. More realistically, however, it is necessary to consider control problems for systems with uncertainty. This chapter addresses some of the issues that arise in this situation. As in Chapter 9, we adopt a stochastic description of uncertainty, which associates probability distributions to the uncertain elements, that is, disturbances and initial conditions. (See Section 12.6 for references to alternative approaches to model uncertainty.) When incomplete state information exists, a popular observer-based control strategy in the presence of stochastic disturbances is to use the certainty equivalence (CE) principle, introduced in Section 5.5 of Chapter 5 for deterministic systems. In the stochastic framework, CE consists of estimating the state and then using these estimates as if they were the true state in the control law that results if the problem were formulated as a deterministic problem (that is, without uncertainty). This strategy is motivated by the unconstrained problem with a quadratic objective function, for which CE is indeed the optimal solution (Åström 1970, Bertsekas 1976). One of the aims of this chapter is to explore the issues that arise from the use of CE in RHC in the presence of constraints. We then turn to the obvious question about the optimality of the CE principle. We show that CE is, indeed, not optimal in general.
We also analyse the possibility of obtaining truly optimal solutions for single-input linear systems with input constraints and uncertainty arising from output feedback and stochastic disturbances. We first find the optimal solution for the case of horizon N = 1, and then we indicate the complications that arise in the case of horizon N = 2. Our conclusion is that, for the case of linear constrained systems, the extra effort involved in the optimal feedback policy is probably not justified in practice. Indeed, we show by example that CE can give near-optimal performance. We thus advocate this approach in real applications.
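A minimal sketch of one certainty-equivalence step for a scalar constrained system (all gains, plant parameters and the constraint below are assumed toy values, not the book's):

```python
# Certainty-equivalence (CE) sketch for a scalar constrained system:
# plant x+ = a x + b u + w, measurement y = x + v. The deterministic
# feedback gain K is applied to the state ESTIMATE as if it were the
# true state, then saturated to respect the input constraint.
a, b = 0.9, 1.0   # plant parameters (assumed)
K = 0.5           # deterministic feedback gain, e.g. from LQR (assumed)
L = 0.6           # observer gain, e.g. steady-state Kalman gain (assumed)
u_max = 0.2       # input constraint |u| <= u_max (assumed)

def ce_step(x_hat, y):
    """One CE step: correct the estimate with the measurement, apply the
    constrained deterministic control law, then predict forward."""
    x_hat = x_hat + L * (y - x_hat)           # measurement update
    u = max(-u_max, min(u_max, -K * x_hat))   # CE control with saturation
    x_hat = a * x_hat + b * u                 # time update (prediction)
    return x_hat, u
```

The point of the chapter is precisely that this convenient recipe, while optimal in the unconstrained quadratic case, is in general suboptimal once the saturation above is active.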
Abstract:
Ozone-induced dissociation (OzID) exploits the gas-phase reaction between mass-selected lipid ions and ozone vapor to determine the position(s) of unsaturation. In this contribution, we describe the modification of a tandem linear ion-trap mass spectrometer specifically for OzID analyses, wherein ozone vapor is supplied to the collision cell. This instrumental configuration provides spatial separation between the mass-selection, ozonolysis reaction, and mass-analysis steps in the OzID process and thus delivers significant enhancements in speed and sensitivity (ca. 30-fold). These improvements allow spectra revealing the double-bond position(s) within unsaturated lipids to be acquired within 1 s, significantly enhancing the utility of OzID in high-throughput lipidomic protocols. The stable ozone concentration afforded by this modified instrument also allows direct comparison of the relative reactivity of isomeric lipids and reveals reactivity trends related to (1) double-bond position, (2) substitution position on the glycerol backbone, and (3) stereochemistry. For cis- and trans-isomers, differences were also observed in the branching ratio of product ions arising from the gas-phase ozonolysis reaction, suggesting that relative ion abundances could be exploited as markers for double-bond geometry. Additional activation energy applied to mass-selected lipid ions during injection into the collision cell (with ozone present) was found to yield spectra containing both OzID and classical CID fragment ions. This combined CID-OzID acquisition on an ostensibly simple monounsaturated phosphatidylcholine within a cow brain lipid extract provided evidence for up to four structurally distinct phospholipids differing in both double-bond position and sn-substitution. (J Am Soc Mass Spectrom 2010, 21, 1989-1999) (C) 2010 American Society for Mass Spectrometry
Abstract:
We examine some variations of standard probability designs that preferentially sample sites based on how easy they are to access. Preferential sampling designs deliver unbiased estimates of the mean and sampling variance and ease the burden of data collection, but at what cost to design efficiency? Preferential sampling has the potential to either increase or decrease sampling variance, depending on the application. We carry out a simulation study to gauge its effect when sampling Soil Organic Carbon (SOC) values in a large agricultural region in south-eastern Australia, where preferential sampling can reduce the distance to travel by up to 16%. Our study is based on a dataset of predicted SOC values produced by a data-mining exercise. We consider three designs and two ways to determine ease of access. The overall conclusion is that sampling performance deteriorates as the strength of preferential sampling increases, because the regions of high SOC are harder to access, so our designs inadvertently target regions of low SOC value. The good news, however, is that Generalised Random Tessellation Stratification (GRTS) designs are not as badly affected as others, and GRTS remains an efficient design compared to its competitors.
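The core idea of keeping a design-unbiased estimate while tilting inclusion probabilities towards accessible sites can be sketched as follows (a toy Horvitz-Thompson-style illustration; the function, weighting scheme and Poisson-style selection are assumptions, not the GRTS designs studied in the paper):

```python
import random

# Toy sketch of preferential probability sampling: inclusion probabilities
# are tilted towards "easy access" sites, and the Horvitz-Thompson
# estimator divides each sampled value by its inclusion probability,
# keeping the estimated total unbiased despite the tilt.
def ht_total(values, access, n_draws, strength=1.0, rng=random.Random(42)):
    w = [a ** strength for a in access]   # strength=0 recovers uniform design
    s = sum(w)
    probs = [x / s for x in w]
    est = 0.0
    for v, p in zip(values, probs):
        pi = min(1.0, n_draws * p)        # Poisson-style inclusion probability
        if rng.random() < pi:
            est += v / pi                 # inverse-probability (HT) weighting
    return est
```

The efficiency question in the abstract is about the variance of this estimator: when access and SOC are negatively correlated, the tilt concentrates effort on low-SOC regions and the variance grows even though the estimator stays unbiased.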
Abstract:
Digital signatures are often used by trusted authorities to make unique bindings between a subject and a digital object; for example, certificate authorities certify that a public key belongs to a domain name, and time-stamping authorities certify that a certain piece of information existed at a certain time. Traditional digital signature schemes, however, impose no uniqueness conditions, so a trusted authority could make multiple certifications for the same subject but different objects, be it intentionally, by accident, or following a (legal or illegal) coercion. We propose the notion of a double-authentication-preventing signature, in which the value to be signed is split into two parts: a subject and a message. If a signer ever signs two different messages for the same subject, enough information is revealed to allow anyone to compute valid signatures on behalf of the signer. This double-signature forgeability property discourages signers from misbehaving (a form of self-enforcement) and would give binding authorities such as CAs a cryptographic argument for resisting legal coercion. We give a generic construction using a new type of trapdoor function with extractability properties, which we show can be instantiated using the group of sign-agnostic quadratic residues modulo a Blum integer.
Abstract:
We present a distinguishing attack against SOBER-128 with linear masking. We found a linear approximation with a bias of 2^-8.8 for the non-linear filter. The attack applies the observation, made by Ekdahl and Johansson, that there is a sequence of clocks for which the linear combination of some states vanishes; this linear dependency allows the linear masking method to be applied. We also show that the bias of the distinguisher can be improved (or estimated more precisely) by considering quadratic terms of the approximation. The probability bias of the quadratic approximation used in the distinguisher is estimated to be O(2^-51.8), so we claim that SOBER-128 is distinguishable from a truly random cipher by observing O(2^103.6) keystream words.
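The stated data complexity follows the usual rule of thumb that a distinguisher exploiting an approximation with probability bias eps needs on the order of eps^-2 samples; a small arithmetic check under that assumption:

```python
# Rule-of-thumb data complexity of a linear/quadratic distinguisher:
# with probability bias eps, about eps**-2 samples are needed, so in
# log2 terms the sample count is -2 * log2(bias).
def samples_needed_log2(bias_log2):
    """log2 of the required keystream words for a given log2(bias)."""
    return -2.0 * bias_log2

keystream_log2 = samples_needed_log2(-51.8)  # bias 2^-51.8 -> 2^103.6 words
```

This reproduces the abstract's figures: the 2^-51.8 quadratic bias implies O(2^103.6) keystream words, whereas the raw 2^-8.8 linear bias alone would suggest only about 2^17.6 samples for the filter approximation in isolation.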
Abstract:
Purpose: This study explores recent claims that humans exhibit a minimum cost of transport (CoTmin) for running which occurs at an intermediate speed, and assesses individual physiological, gait and training characteristics. Methods: Twelve healthy participants with varying levels of fitness and running experience ran on a treadmill at six self-selected speeds in a discontinuous protocol over three sessions. Running speed (km·h-1), V̇O2 (mL·kg-1·km-1), CoT (kcal·km-1), heart rate (beats·min-1) and cadence (steps·min-1) were measured continuously. V̇O2max was measured in a fourth testing session. The occurrence of a CoTmin was investigated and its presence or absence examined with respect to fitness, gait and training characteristics. Results: Five participants showed a clear CoTmin at an intermediate speed and a statistically significant (p < 0.05) quadratic CoT-speed function, while the other participants did not show such evidence. Participants were then categorized and compared with respect to the strength of evidence for a CoTmin (ClearCoTmin and NoCoTmin). The ClearCoTmin group displayed a significantly higher correlation between speed and cadence and more endurance training and exercise sessions per week than the NoCoTmin group, as well as a marginally non-significant but higher aerobic capacity. Some runners still showed a CoTmin at an intermediate speed even after subtraction of resting energy expenditure. Conclusion: The findings confirm the existence of an optimal speed for human running in some but not all participants. Those exhibiting a CoTmin undertook a higher volume of running, ran with a cadence that was more consistently modulated with speed, and tended to be aerobically fitter. The ability to minimise the energetic cost of transport appears not to be a ubiquitous feature of human running but may emerge in some individuals with extensive running experience.
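The test for an intermediate-speed CoTmin amounts to fitting a quadratic CoT-speed function and checking for an interior minimum; a sketch with made-up per-speed averages (illustrative values, not the study's measurements):

```python
import numpy as np

# Fit a quadratic CoT-speed curve, CoT = a v^2 + b v + c, and locate the
# minimum-cost speed at the vertex v* = -b / (2a). An interior v* with
# a > 0 is the signature of a CoTmin at an intermediate speed.
speed = np.array([8.0, 9.0, 10.0, 11.0, 12.0, 13.0])    # km/h (illustrative)
cot   = np.array([72.0, 68.5, 66.8, 66.5, 67.9, 70.6])  # kcal/km (illustrative)

a, b, c = np.polyfit(speed, cot, 2)  # least-squares quadratic fit
v_opt = -b / (2.0 * a)               # speed minimising the fitted CoT
```

With real data one would also test the significance of the quadratic coefficient (as the study's p < 0.05 criterion does); a non-significant or negative `a` corresponds to the NoCoTmin group.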
Abstract:
In this paper a novel controller for stable and precise operation of multi-rotors with heavy slung loads is introduced. First, simplified equations of motion for the multi-rotor and slung load are derived. The model is then used to design a Nonlinear Model Predictive Controller (NMPC) that can manage the highly nonlinear dynamics whilst accounting for system constraints. The controller is shown to track specified waypoints whilst simultaneously damping large slung-load oscillations. A Linear-Quadratic Regulator (LQR) controller is also derived, and control performance is compared in simulation. Results show the improved performance of the NMPC controller over a larger flight envelope, including aggressive manoeuvres and large slung-load displacements, while its computational cost remains small enough for practical implementation. Such systems for small Unmanned Aerial Vehicles (UAVs) may provide significant benefit to several applications in agriculture, law enforcement and construction.
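For the LQR baseline, the feedback gain comes from the discrete algebraic Riccati equation; a minimal sketch on an illustrative two-state model (not the paper's multirotor/slung-load dynamics, and the weights are assumed):

```python
import numpy as np

# Minimal discrete-time LQR sketch: iterate the Riccati recursion to a
# fixed point P, form the gain K, and apply state feedback u = -K x.
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])    # double-integrator-like discretised model
B = np.array([[0.005],
              [0.1]])
Q = np.diag([1.0, 0.1])       # state weights (assumed)
R = np.array([[0.01]])        # input weight (assumed)

P = Q.copy()
for _ in range(500):          # value iteration on the Riccati recursion
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    P = Q + A.T @ P @ (A - B @ K)

# The closed loop x+ = (A - B K) x should be stable: spectral radius < 1.
rho = max(abs(np.linalg.eigvals(A - B @ K)))
```

Unlike the NMPC in the paper, this gain is fixed and ignores constraints, which is why LQR loses to NMPC on aggressive manoeuvres and large load displacements.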
Abstract:
BACKGROUND INFORMATION: Evidence has shown that mesenchymal-epithelial transition (MET) and epithelial-mesenchymal transition (EMT) are linked to stem cell properties. We currently lack a model showing how the occurrence of MET and EMT in immortalised cells influences the maintenance of stem cell properties. We therefore established a project to investigate the roles of EMT and MET in the acquisition of stem cell properties in immortalised oral epithelial cells. RESULTS: In this study, a retroviral transfection vector (pLXSN-hTERT) was used to immortalise oral epithelial cells by insertion of the hTERT gene, yielding the hTERT(+) oral mucosal epithelial cell line (OME). The protein and RNA expression of EMT transcription factors (Snail, Slug and Twist), their downstream markers (E-cadherin and N-cadherin) and embryonic stem cell markers (OCT4, Nanog and Sox2) was studied in these cells by reverse transcription PCR and Western blotting. Some EMT markers were detected at both the mRNA and protein levels. Adipocytes and bone cells were noted in the multi-differentiation assay, showing that the immortalised cells underwent EMT. The differentiation assay for hTERT(+)-OME cells revealed recovery of epithelial phenotypes, implicating the presence of MET. The stem cell properties were confirmed by detection of the appropriate markers. Altered expression of alpha-tubulin and gamma-tubulin in both two-dimensional-cultured (serum-free) and three-dimensional-cultured hTERT(+)-OME spheroids indicated re-programming of cytoskeletal proteins, which is attributed to MET processes in hTERT(+)-OME cells. CONCLUSIONS: EMT and MET are essential for hTERT-immortalised cells to maintain their epithelial stem cell properties.
Abstract:
In recent years, the beauty leaf plant (Calophyllum inophyllum) has been considered a potential second-generation biodiesel source due to its high seed oil content, high fruit production rate, simple cultivation and ability to grow in a wide range of climatic conditions. However, due to the high free fatty acid (FFA) content of this oil, the potential of this biodiesel feedstock is still unrealized, and little research has been undertaken on it. In this study, transesterification of beauty leaf oil to produce biodiesel has been investigated. A two-step conversion method consisting of acid-catalysed pre-esterification and alkali-catalysed transesterification was used. The three main factors that drive the conversion of vegetable oil (triglycerides) to biodiesel (fatty acid methyl ester, FAME) were studied using response surface methodology (RSM) based on a Box-Behnken experimental design; the factors considered were catalyst concentration, methanol-to-oil molar ratio and reaction temperature. Linear and full quadratic regression models were developed to predict FFA and FAME concentration and to optimize the reaction conditions, and the significance of the factors and their interactions in both stages was determined using analysis of variance (ANOVA). The conditions giving the largest reduction in FFA concentration in the acid-catalysed pre-esterification were a 30:1 methanol-to-oil molar ratio, 10% (w/w) sulfuric acid catalyst loading and a 75 °C reaction temperature. In the alkali-catalysed transesterification, a 7.5:1 methanol-to-oil molar ratio, 1% (w/w) sodium methoxide catalyst loading and a 55 °C reaction temperature gave the highest FAME conversion. The good agreement between model outputs and experimental results demonstrates that this methodology may be useful for optimizing industrial biodiesel production from beauty leaf oil, and possibly other industrial processes as well.
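The "full quadratic" RSM model in three factors can be sketched as an ordinary least-squares fit of intercept, linear, pure-quadratic and interaction terms (synthetic data below; the coefficients and coded levels are illustrative, not the study's):

```python
import numpy as np

# Full quadratic response-surface model in three coded factors
# (x1 = catalyst concentration, x2 = methanol:oil ratio, x3 = temperature):
# y = b0 + sum(bi xi) + sum(bii xi^2) + sum(bij xi xj), fit by least squares.
def quadratic_design_matrix(X):
    x1, x2, x3 = X[:, 0], X[:, 1], X[:, 2]
    return np.column_stack([
        np.ones(len(X)),        # intercept
        x1, x2, x3,             # linear terms
        x1**2, x2**2, x3**2,    # pure quadratic terms
        x1*x2, x1*x3, x2*x3,    # two-factor interactions
    ])

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(15, 3))                  # coded levels (toy data)
y = 90 - 3*X[:, 0]**2 + 2*X[:, 1] + X[:, 0]*X[:, 2]   # synthetic response
beta, *_ = np.linalg.lstsq(quadratic_design_matrix(X), y, rcond=None)
```

In the study this fit is done on Box-Behnken design points, and ANOVA on the fitted coefficients identifies which terms are significant before the model is used to locate the optimal reaction conditions.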
Abstract:
A battery energy storage system (BESS) is to be incorporated into a wind farm to achieve constant power dispatch. The design of the BESS is based on forecasted wind speed, and the technique assumes that the distribution of the error between forecasted and actual wind speeds is Gaussian. It is then shown that, although the error between predicted and actual wind powers can be evaluated, it is non-Gaussian. With the distribution of the wind-power prediction error known, the capacity of the BESS can be determined in terms of the confidence level of meeting a specified constant power dispatch commitment. Furthermore, a short-term power dispatch strategy is developed that takes into account the state of charge (SOC) of the BESS. The proposed approach is useful in the planning of the wind farm-BESS scheme and in the operational planning of the wind power generating station.
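To illustrate how a confidence level maps to a capacity margin in the simple Gaussian case (the abstract's point is that the wind-power error is in fact non-Gaussian, so the real quantile must come from the derived error distribution; the numbers here are hypothetical):

```python
from statistics import NormalDist

# Illustrative sizing sketch: if the dispatch error were Gaussian with
# standard deviation sigma (MW), the BESS power margin needed to cover
# the shortfall at a given confidence level is the corresponding normal
# quantile. Replacing inv_cdf with the non-Gaussian error distribution's
# quantile gives the paper's actual sizing rule.
def bess_margin(sigma_mw, confidence):
    return NormalDist(0.0, sigma_mw).inv_cdf(confidence)

margin = bess_margin(5.0, 0.95)  # about 8.22 MW for a 5 MW error std dev
```

The same quantile calculation, applied per dispatch interval and adjusted for the current SOC, underpins the short-term dispatch strategy described above.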