992 results for Constant-weight Codes
Abstract:
Certain binary codes having good autocorrelation properties akin to Barker codes are studied.
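The property at stake is the aperiodic autocorrelation: a Barker code has every off-peak (sidelobe) value of magnitude at most 1. The following is a minimal Python check of this property for the length-13 Barker code, given as background rather than taken from the paper:

```python
# Aperiodic autocorrelation of a +/-1 sequence; for a Barker code,
# every off-peak (sidelobe) value has magnitude at most 1.
def autocorrelation(seq):
    n = len(seq)
    return [sum(seq[i] * seq[i + k] for i in range(n - k)) for k in range(n)]

barker13 = [1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1]
acf = autocorrelation(barker13)
print(acf)                                # [13, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1]
assert all(abs(v) <= 1 for v in acf[1:])  # the Barker sidelobe condition
```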
Abstract:
This paper provides a critical examination of the taken-for-granted nature of the codes/guidelines used in the creation of designed spaces, their social relations with designers, and their agency in designing for people with disabilities. We conducted case studies at three national museums in Canada, where we began by questioning societal representations of disability within and through material culture, drawing on actor-network theory, in which non-human actors have considerable agency. Specifically, our exploration looks into how representations of disability for designing are interpreted through mediums such as codes, standards and guidelines. We accomplish this through: deep analyses of the museums' built environments (outdoors and indoors); interviews with the curators, architects and designers involved in the creation of the spaces/displays; dialoguing-while-in-motion interviews with people who have disabilities within the spaces; and analyses of available documents relating to the creation of the museums. By mapping how codes/guidelines 'represent' disability and how they contribute to 'fixing' disability, our rich data set allows this paper to take an alternative approach to designing for/with disability, one that questions societal representations of disability within and through material culture.
Abstract:
Key message: We detected seven QTLs for 100-grain weight in sorghum using an F2 population, and delimited qGW1 to a 101-kb region on the short arm of chromosome 1, which contained 13 putative genes. Abstract: Sorghum is one of the most important cereal crops, and breeding high-yielding sorghum varieties will have a profound impact on global food security. Grain weight is an important component of grain yield. It is a quantitative trait controlled by multiple quantitative trait loci (QTLs); however, the genetic basis of grain weight in sorghum is not well understood. In the present study, using an F2 population derived from a cross between the grain sorghum variety SA2313 (Sorghum bicolor) and the Sudan-grass variety Hiro-1 (S. bicolor), we detected seven QTLs for 100-grain weight. One of them, qGW1, was detected consistently over 2 years and explained between 20% and 40% of the phenotypic variation across multiple genetic backgrounds. Using extreme recombinants from a fine-mapping F3 population, we delimited qGW1 to a 101-kb region on the short arm of chromosome 1 containing 13 predicted gene models, one of which was found to be under purifying selection during domestication. However, none of the candidate genes shared sequence similarity with previously cloned grain weight-related genes from rice. This study will facilitate isolation of the gene underlying qGW1 and advance our understanding of the regulatory mechanisms of grain weight. SSR markers linked to the qGW1 locus can be used to improve sorghum grain yield through marker-assisted selection.
Abstract:
In this paper we investigate the effectiveness of class-specific sparse codes in the context of discriminative action classification. The bag-of-words representation is widely used in activity recognition to encode features, and although it yields state-of-the-art performance with several feature descriptors, it still suffers from large quantization errors, which reduce overall performance. Recently proposed sparse representation methods have been shown to represent features effectively as a linear combination of an overcomplete dictionary by minimizing the reconstruction error. In contrast to most sparse representation methods, which focus on Sparse-Reconstruction-based Classification (SRC), this paper focuses on discriminative classification using an SVM, constructing class-specific sparse codes for motion and appearance separately. Experimental results demonstrate that separate motion- and appearance-specific sparse coefficients provide the most effective and discriminative representation for each class compared to a single set of class-specific sparse coefficients.
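As a rough sketch of the general idea, not the authors' implementation: one dictionary can be learned per class (separately for motion and for appearance features), each feature encoded against every class dictionary, and the concatenated sparse coefficients fed to a linear SVM. The helper below is hypothetical, using scikit-learn, with `features_by_class`, `X_train`, and `y_train` as assumed inputs:

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning
from sklearn.svm import LinearSVC

# Hypothetical inputs: features_by_class maps class label -> (n_i, d) feature
# matrix for that class; X is an (n, d) matrix of features to encode.
def class_specific_codes(features_by_class, X, n_atoms=64):
    codes = []
    for label, feats in sorted(features_by_class.items()):
        # Learn an (ideally overcomplete) dictionary from this class's features only.
        dl = DictionaryLearning(n_components=n_atoms,
                                transform_algorithm="lasso_lars",
                                transform_alpha=0.1)
        dl.fit(feats)
        codes.append(dl.transform(X))  # sparse codes of X w.r.t. this class's dictionary
    return np.hstack(codes)           # concatenate per-class codes into one descriptor

# clf = LinearSVC().fit(class_specific_codes(features_by_class, X_train), y_train)
```

Motion and appearance features would each get their own set of class dictionaries, with the two resulting descriptors concatenated or classified separately.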
Abstract:
This thesis studies optimisation problems related to modern large-scale distributed systems, such as wireless sensor networks and wireless ad-hoc networks. The concrete tasks that we use as motivating examples are the following: (i) maximising the lifetime of a battery-powered wireless sensor network, (ii) maximising the capacity of a wireless communication network, and (iii) minimising the number of sensors in a surveillance application. A sensor node consumes energy both when it is transmitting or forwarding data and when it is performing measurements. Hence task (i), lifetime maximisation, can be approached from two different perspectives. First, we can seek optimal data flows that make the most of the energy resources available in the network; such optimisation problems are examples of so-called max-min linear programs. Second, we can conserve energy by putting redundant sensors into sleep mode; we arrive at the sleep scheduling problem, in which the objective is to find an optimal schedule that determines when each sensor node is asleep and when it is awake. In a wireless network, simultaneous radio transmissions may interfere with each other. Task (ii), capacity maximisation, therefore gives rise to another scheduling problem, the activity scheduling problem, in which the objective is to find a minimum-length conflict-free schedule that satisfies the data transmission requirements of all wireless communication links. Task (iii), minimising the number of sensors, is related to the classical graph problem of finding a minimum dominating set. However, if we are interested not only in detecting an intruder but also in locating the intruder, it is not sufficient to solve the dominating set problem; formulations such as minimum-size identifying codes and locating dominating codes are more appropriate. This thesis presents approximation algorithms for each of these optimisation problems, i.e., for max-min linear programs, sleep scheduling, activity scheduling, identifying codes, and locating dominating codes. Two complementary approaches are taken. The main focus is on local algorithms, which are constant-time distributed algorithms. The contributions include local approximation algorithms for max-min linear programs, sleep scheduling, and activity scheduling. In the case of max-min linear programs, tight upper and lower bounds are proved for the best possible approximation ratio that can be achieved by any local algorithm. The second approach is the study of centralised polynomial-time algorithms in local graphs; these are geometric graphs whose structure exhibits spatial locality. Among other contributions, it is shown that while identifying codes and locating dominating codes are hard to approximate in general graphs, they admit a polynomial-time approximation scheme in local graphs.
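To make the distinction between a dominating set and an identifying code concrete: a dominating set only requires that every vertex has a codeword in its closed neighbourhood, while an identifying code additionally requires that no two vertices are covered by the same set of codewords, so the intruder's position can be read off from which codewords raise an alarm. A small brute-force illustration in Python (an example constructed for this summary, not code from the thesis):

```python
from itertools import combinations

# Closed neighbourhoods of the path graph 0-1-2-3.
adj = {0: {0, 1}, 1: {0, 1, 2}, 2: {1, 2, 3}, 3: {2, 3}}

def is_identifying_code(code):
    # Every vertex must be covered, and covered by a distinct set of codewords.
    signatures = [frozenset(adj[v] & code) for v in adj]
    return all(signatures) and len(set(signatures)) == len(signatures)

# Smallest identifying code of P4 by exhaustive search; {0, 3} dominates P4,
# but three codewords are needed to identify every vertex uniquely.
best = min((set(c) for r in range(1, 5) for c in combinations(adj, r)
            if is_identifying_code(set(c))), key=len)
print(best)  # e.g. {0, 1, 2}
```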
Abstract:
A ratio-transformer method suitable for measuring the dielectric constant of highly conducting liquids is described. The resistance between the two plates of the capacitor can be as low as 2 kΩ; in this method, variations in this low resistance introduce no error into the capacitance measurement. One of the features of the method is the simplicity of balancing the resistance, using an LDR (light-dependent resistor), without influencing the independent capacitance measurement. The ratio transformer enables the ground capacitances to be eliminated. The change in leakage inductance of the ratio transformer when changing ratios is also taken into account. The capacitance of a dielectric cell of the order of 50 pF can be measured from 1 kHz to 100 kHz with a resolution of 0.06 pF. The electrode polarisation problem is also discussed.
Abstract:
A method that yields optical Barker codes of the smallest known lengths for a given discrimination is described.
Abstract:
The damping capacity of cast graphitic aluminum alloy composites has been measured using a torsion pendulum at constant strain amplitude. Flake-graphite particles dispersed in the matrix of aluminum alloys were found to increase the damping capacity, and the improvement grew with the amount of graphite dispersed in the matrix. At sufficiently high graphite contents, the damping capacity of graphitic aluminum composites approaches that of cast iron. The ratio of damping capacity to density is higher for graphitic aluminum alloys than for cast iron, making them very attractive as lightweight, high-damping materials for possible aircraft applications. Machinability tests on graphite particle-aluminum composites, conducted at speeds of 315 sfm and 525 sfm, showed that, for a given particle size, chip length decreased with graphite content; at a given machining speed, chip length also decreased as the graphite particle size was reduced. Metallographic examination shows that graphite particles act as chip breakers and are frequently sheared parallel to the plane of the
Abstract:
A galactose-specific protein (RC1) isolated from Ricinus communis beans was found to give a precipitin reaction with concanavalin A. Its carbohydrate content amounted to 8–9% of the total protein and was found to be rich in mannose. The interaction of RC1 with galactose and lactose was measured in 0.05 M phosphate buffer containing 0.2 M NaCl (pH 6.8) by conventional equilibrium dialysis. From analysis of the binding data by the Scatchard method, the association constant (Ka) at 5°C was calculated as 3.8 mM⁻¹ and 1.2 mM⁻¹ for lactose and galactose, respectively. In both cases the number of binding sites per molecule of RC1, with a molecular weight of 120,000, was found to be 2. From the temperature-dependent Ka values for the binding of lactose, values of −5.7 kcal/mol and −4.3 cal·mol⁻¹·K⁻¹ were calculated for ΔH and ΔS, respectively. The addition of concanavalin A to RC1, or vice versa, led to the formation of the insoluble complex RC1·ConA4, containing one molecule of RC1 and one molecule of tetrameric concanavalin A (ConA4), which could be dissociated by the addition of concanavalin A-specific sugars. The complex formation results in a time-dependent appearance of turbidity in the time range from 10 s to 10 min. From measurements of the time-dependent appearance and disappearance of the turbidity, the formation (kf) and dissociation (kd) rate constants were calculated as 3 mM⁻¹·s⁻¹ and 0.07 ks⁻¹, respectively. The ratio kf/kd (43 μM⁻¹), which corresponds to the association constant of the complex RC1·ConA4, is higher than that of mannoside·ConA4, suggesting that protein–protein interaction contributes significantly to stabilising glycoprotein·lectin complexes. The relevance of this finding to understanding the chemical specificities involved in a model cell–lectin interaction is discussed.
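The thermodynamic parameters follow from the temperature dependence of Ka via the standard van't Hoff relation (a reconstruction of the routine calculation, not taken verbatim from the paper):

$$
\ln K_a = \frac{\Delta S}{R} - \frac{\Delta H}{RT},
\qquad
\Delta G = \Delta H - T\,\Delta S = -RT \ln K_a ,
$$

so plotting ln Ka against 1/T gives a line with slope −ΔH/R and intercept ΔS/R. As a consistency check on the reported figures: with Ka = 3.8 mM⁻¹ = 3.8 × 10³ M⁻¹ at 278 K, −RT ln Ka ≈ −4.5 kcal/mol, matching ΔH − TΔS = −5.7 + 278 × 0.0043 ≈ −4.5 kcal/mol.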
Abstract:
A constant-volume window bomb has been used to measure the characteristic velocity (c*) of rocket propellants. An analysis of the combustion process inside the bomb, including heat losses, has been made. Experiments on double-base and composite propellants have revealed (i) basic heat-transfer aspects inside the bomb and (ii) combustion characteristics of ammonium perchlorate/polyester propellants. It was found that combustion continues even beyond the peak pressure and temperature points. Lithium fluoride-mixed propellants do not show significant differences in c*, though the low-pressure deflagration limit increases with the percentage of lithium fluoride.
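For reference, the characteristic velocity is conventionally defined in terms of the chamber pressure $p_c$, nozzle throat area $A_t$, and propellant mass flow rate $\dot m$ (the standard rocket-propulsion definition, not restated in the abstract):

$$
c^{*} = \frac{p_c \, A_t}{\dot m} .
$$

It is a figure of merit for the propellant and combustion efficiency, independent of nozzle expansion.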
Abstract:
The relation between optical Barker codes and self-orthogonal convolutional codes is pointed out. It is then used to update the results of an earlier publication.
Abstract:
Volatility is central to options pricing and risk management. It reflects the uncertainty of investors and the inherent instability of the economy. Time series methods are among the most widely applied scientific methods for analyzing and predicting volatility. Very frequently sampled data contain much valuable information about the different elements of volatility and may ultimately reveal the reasons for time-varying volatility. The use of such ultra-high-frequency data is common to all three essays of the dissertation, which belongs to the field of financial econometrics. The first essay uses wavelet methods to study the time-varying behavior of scaling laws and long memory in the five-minute volatility series of Nokia on the Helsinki Stock Exchange around the burst of the IT bubble. The essay is motivated by earlier findings which suggest that different scaling laws may apply to intraday time-scales and to larger time-scales, implying that so-called annualized volatility depends on the data sampling frequency. The empirical results confirm the appearance of time-varying long memory and different scaling laws that, to a significant extent, can be attributed to investor irrationality and to an intraday volatility periodicity called the New York effect. The findings have potentially important consequences for options pricing and risk management, which commonly assume constant memory and scaling. The second essay investigates modelling the duration between trades in stock markets. Durations convey information about investor intentions and provide an alternative view of volatility. Generalizations of standard autoregressive conditional duration (ACD) models are developed to meet needs observed in previous applications of the standard models. According to the empirical results, based on data for actively traded stocks on the New York Stock Exchange and the Helsinki Stock Exchange, the proposed generalization clearly outperforms the standard models and also performs well in comparison with another recently proposed alternative. The distribution used to derive the generalization may also prove valuable in other areas of risk management. The third essay studies empirically the effect of decimalization on volatility and market microstructure noise. Decimalization refers to the change from fractional to decimal pricing; it was carried out on the New York Stock Exchange in January 2001. The methods used here are more accurate than those in earlier studies and put more weight on market microstructure. The main result is that decimalization decreased observed volatility by reducing noise variance, especially for highly active stocks. The results are helpful for risk management and market mechanism design.
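For orientation, the standard ACD(1,1) model of Engle and Russell, which the second essay generalises, specifies the conditional expected duration recursively (the essay's own generalisation is not reproduced here):

$$
x_i = \psi_i \,\varepsilon_i,
\qquad
\psi_i = \mathbb{E}[\,x_i \mid x_{i-1}, x_{i-2}, \dots\,]
       = \omega + \alpha\, x_{i-1} + \beta\, \psi_{i-1},
$$

where $x_i$ is the $i$-th duration, the $\varepsilon_i$ are i.i.d. positive innovations with unit mean, and $\omega > 0$, $\alpha, \beta \ge 0$ ensure positivity. Generalisations typically relax the distributional assumption on $\varepsilon_i$ or the functional form of the recursion.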
Abstract:
Abstract is not available.