Abstract:
Superconducting thick films of Bi2Sr2CaCu2Oy (Bi-2212) on single-crystalline (100) MgO substrates have been prepared using a doctor-blade technique and a partial-melt process. It is found that the phase composition and the amount of Ag addition to the paste affect the structure and superconducting properties of the partially melted thick films. The optimum heat treatment schedule for obtaining high Jc has been determined for each paste. The heat treatment ensures attainment of high purity for the crystalline Bi-2212 phase and high orientation of Bi-2212 crystals, in which the c-axis is perpendicular to the substrate. The highest Tc, obtained by resistivity measurement, is 92.2 K. The best transport Jc of these thick films, measured at 77 K in self-field, is 8 × 10³ A cm⁻².
Abstract:
Preliminary data are presented on a detailed statistical analysis of k-factor determination for a single class of minerals (amphiboles) which contain a wide range of element concentrations. These amphiboles are homogeneous, contain few (if any) subsolidus microstructures and can be readily prepared for thin-film analysis. In previous studies, element loss during the period of irradiation has been assumed negligible for the determination of k-factors. Since this phenomenon may be significant for certain mineral systems, we also report on the effect of temperature on k-factor determination for various elements using small probe sizes (approximately 20 nm).
Abstract:
Multilevel converters are used in several applications because of the advantages they offer in generating high-quality output voltage. Various modulation and control techniques, such as space vector modulation and harmonic elimination (HE) methods, have been introduced by researchers to control the output voltage of multilevel converters. Multilevel converters may have a DC link with equal or unequal DC voltages. In this study a new technique based on the HE method is proposed for multilevel converters with unequal DC link voltages. The DC link voltage levels are treated as additional variables for the HE method, and their values are defined based on the HE results. Increasing the number of voltage levels can reduce the low-order harmonic content because more variables become available. In comparison with previous methods, this new technique improves the output voltage quality by reducing its total harmonic distortion, which must be taken into consideration for applications such as uninterruptible power supplies, motor drive systems and piezoelectric transducer excitation. To verify the proposed modulation technique, MATLAB simulations and experimental tests are carried out for a single-phase four-level diode-clamped converter.
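The abstract does not give the paper's exact problem setup, but the general selective-harmonic-elimination idea it builds on can be sketched: for a quarter-wave-symmetric staircase waveform, the n-th odd harmonic amplitude is b_n = (4/(n·π)) Σ_k V_k·cos(n·θ_k), and treating the step voltages V_k (the unequal DC-link levels) as unknowns alongside the switching angles θ_k yields extra degrees of freedom for zeroing low-order harmonics. The Python sketch below solves such a system numerically; the number of steps, the controlled harmonic set and the per-unit targets are illustrative assumptions, not the paper's values.

import numpy as np
from scipy.optimize import fsolve

N_STEPS = 3                      # three positive steps in the quarter wave (illustrative)
HARMONICS = [1, 3, 5, 7, 9, 11]  # fundamental plus low-order harmonics to control
TARGETS = [1.0, 0, 0, 0, 0, 0]   # per-unit fundamental amplitude; the rest eliminated

def harmonic_amplitudes(angles, levels):
    # b_n = (4 / (n*pi)) * sum_k V_k * cos(n * theta_k) for each controlled harmonic n
    return [4.0 / (n * np.pi) * np.sum(levels * np.cos(n * angles)) for n in HARMONICS]

def residuals(x):
    angles, levels = x[:N_STEPS], x[N_STEPS:]
    return np.array(harmonic_amplitudes(angles, levels)) - np.array(TARGETS)

# Initial guess: angles spread inside (0, pi/2), equal per-unit step voltages.
# Note: angle ordering and voltage positivity are not enforced in this simple sketch.
x0 = np.concatenate([np.linspace(0.2, 1.2, N_STEPS), np.full(N_STEPS, 0.3)])
solution = fsolve(residuals, x0)
theta, v = solution[:N_STEPS], solution[N_STEPS:]
print("switching angles (rad):", np.round(theta, 4))
print("per-unit step voltages:", np.round(v, 4))

With equal, fixed DC levels only the three angles would be free, so only two harmonics besides the fundamental could be eliminated; letting the levels vary is what allows the larger harmonic set above to be targeted.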
Abstract:
Motivation: Unravelling the genetic architecture of complex traits requires large amounts of data, sophisticated models and large computational resources. The lack of user-friendly software incorporating all these requisites is delaying progress in the analysis of complex traits. Methods: Linkage disequilibrium and linkage analysis (LDLA) is a high-resolution gene-mapping approach based on sophisticated mixed linear models, applicable to any population structure. LDLA can use population history information in addition to pedigree and molecular markers to decompose traits into genetic components. Analyses are distributed in parallel over a large public grid of computers in the UK. Results: We have demonstrated the performance of LDLA with analyses of simulated data. There are real gains in statistical power to detect quantitative trait loci when using historical information compared with traditional linkage analysis. Moreover, the use of a grid of computers significantly increases computational speed, allowing analyses that would have been prohibitive on a single computer.
Abstract:
A novel multiple regression method (RM) is developed to predict identity-by-descent probabilities at a locus L (IBDL) among individuals without pedigree, given information on surrounding markers and population history. These IBDL probabilities are a function of the increase in linkage disequilibrium (LD) generated by drift in a homogeneous population over generations. Three parameters are sufficient to describe population history: effective population size (Ne), number of generations since foundation (T), and marker allele frequencies among founders (p). The IBDL probabilities are used in a simulation study to map a quantitative trait locus (QTL) via variance component estimation. RM is compared to a coalescent method (CM) in terms of power and robustness of QTL detection. Differences between RM and CM are small but significant. For example, RM is more powerful than CM in dioecious populations, but not in monoecious populations. Moreover, RM is more robust than CM when marker phases are unknown, when there is complete LD among founders, or when Ne is wrong, and less robust when p is wrong. CM utilises all marker haplotype information, whereas RM utilises the information contained in each individual marker and all possible marker pairs, but not in higher-order interactions. RM consists of a family of models encompassing four different population structures and two ways of using marker information, which contrasts with the single model that must cater for all possible evolutionary scenarios in CM.
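The regression method itself is not specified in the abstract, but the role of the two history parameters Ne and T can be illustrated with standard textbook approximations for drift: Wright's recursion for the expected identity by descent after T generations, and Sved's (1971) expectation for drift-generated LD. The sketch below uses these classical formulas only to show how the inputs behave; it is not the authors' RM, and the parameter values are arbitrary.

# Standard drift approximations used purely to illustrate the Ne/T inputs;
# this is not the paper's regression method (RM).

def expected_ibd(ne: float, t: int) -> float:
    # Expected probability that two homologous genes are IBD after t generations
    # of drift in a closed population of effective size ne (founders assumed non-IBD).
    return 1.0 - (1.0 - 1.0 / (2.0 * ne)) ** t

def expected_r2(ne: float, c: float) -> float:
    # Sved's (1971) approximation for drift-generated LD (r^2) at recombination fraction c.
    return 1.0 / (1.0 + 4.0 * ne * c)

if __name__ == "__main__":
    ne, t = 100, 20                      # illustrative values, not from the paper
    print(f"baseline IBD after {t} generations: {expected_ibd(ne, t):.4f}")
    for c in (0.001, 0.01, 0.05):
        print(f"expected r^2 at c = {c}: {expected_r2(ne, c):.4f}")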
Abstract:
In this study, organoclays were prepared through ion exchange with a single cationic surfactant, hexadecyltrimethylammonium bromide, and characterised by a range of methods including X-ray diffraction (XRD) and thermogravimetric analysis. Changes in the surface properties of montmorillonite and the organoclays were observed, and the basal spacings of organoclays with and without p-nitrophenol were determined using XRD. The thermal stability of both organoclays was measured using thermogravimetry. As the surfactant loading increased, expanded basal spacings were observed and different molecular configurations of the surfactant were identified. When the surfactant loading exceeded 1.0 CEC, surfactant molecules tended to adsorb strongly on the clay surface, which resulted in increased affinity for organic compounds. The adsorbed p-nitrophenol and the surfactant decomposed simultaneously. Hence, the surfactant molecules and the adsorbed p-nitrophenol are important in determining the thermal stabilities of organoclays. This study enhances the understanding of the structure and adsorption properties of organoclays and has further implications for the application of organoclays as filter materials for the removal of organic pollutants from aqueous solutions.
Abstract:
Bridges are currently rated individually for maintenance and repair action according to the structural condition of their elements. Dealing with thousands of bridges and the many factors that cause deterioration makes this rating process extremely complicated. The current simplified but practical methods are not accurate enough. On the other hand, the sophisticated, more accurate methods are only used for a single or particular bridge type. It is therefore necessary to develop a practical and accurate rating system for a network of bridges. The first and most important step in achieving this aim is to classify bridges based on the differences in nature and the unique characteristics of the critical factors, and the relationships between them, across a network of bridges. Critical factors and vulnerable elements will be identified and placed in different categories. This classification method will be used to develop a new practical rating method for a network of railway bridges based on criticality and vulnerability analysis. This rating system will be more accurate and economical, as well as improving the safety and serviceability of railway bridges.
Traffic queue estimation for metered motorway on-ramps through use of loop detector time occupancies
Abstract:
The primary objective of this study is to develop a robust queue estimation algorithm for motorway on-ramps. Real-time queue information is a vital input for dynamic queue management on metered on-ramps. Accurate and reliable queue information enables the on-ramp queue to be managed adaptively to the actual queue size, and thus minimises the adverse impacts of queue flush while increasing the benefit of ramp metering. The proposed algorithm is developed within the Kalman filter framework. The fundamental conservation model is used to project the system state (queue size) from the flow-in and flow-out measurements. The projection results are then updated with the measurement equation, using the time occupancies from mid-link and link-entrance loop detectors. This study also proposes a novel single point correction method, which resets the estimated system state to eliminate the counting errors that accumulate over time. In the performance evaluation, the proposed algorithm demonstrated accurate and reliable performance and consistently outperformed the benchmarked Single Occupancy Kalman filter (SOKF) method. The improvements over SOKF are 62% and 63% on average in terms of estimation accuracy (MAE) and reliability (RMSE), respectively. The benefit of the innovative concepts of the algorithm is well justified by the improved estimation performance in congested ramp traffic conditions, where long queues may significantly compromise the benchmark algorithm's performance.
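The abstract outlines the filter structure (conservation-model state projection from flow-in/flow-out counts, a measurement update based on detector time occupancies, and a single point correction) but not its equations. The scalar Kalman filter sketch below follows that structure under assumed notation; the linear occupancy-to-queue mapping and all noise variances are illustrative placeholders, not the paper's calibrated measurement model.

class RampQueueKF:
    """Scalar Kalman filter for on-ramp queue size, sketched from the abstract."""

    def __init__(self, q0=0.0, p0=10.0, process_var=4.0, meas_var=25.0):
        self.q = q0                    # state: queue size (vehicles)
        self.p = p0                    # state variance
        self.process_var = process_var
        self.meas_var = meas_var

    def predict(self, inflow, outflow, dt):
        # Conservation model: the queue grows by the net inflow over the interval.
        self.q += (inflow - outflow) * dt
        self.p += self.process_var

    def update(self, occupancy, veh_per_unit_occupancy=80.0):
        # Measurement update from detector time occupancy; the linear
        # occupancy-to-queue mapping is an assumption for illustration only.
        z = occupancy * veh_per_unit_occupancy
        k = self.p / (self.p + self.meas_var)   # Kalman gain
        self.q += k * (z - self.q)
        self.p *= (1.0 - k)
        self.q = max(self.q, 0.0)               # a queue cannot be negative

    def reset(self, known_queue=0.0):
        # Single point correction: clear accumulated counting error when the
        # queue size is known (e.g. the ramp is observed to be empty).
        self.q, self.p = known_queue, 1.0

# Illustrative use over one 20 s metering interval (flows in veh/s):
kf = RampQueueKF()
kf.predict(inflow=0.5, outflow=0.3, dt=20.0)
kf.update(occupancy=0.35)
print(round(kf.q, 1))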
Abstract:
The primary objective of this study is to develop a robust queue estimation algorithm for motorway on-ramps. Real-time queue information is the most vital input for dynamic queue management that can treat long queues on metered on-ramps more effectively. The proposed algorithm is developed within the Kalman filter framework. The fundamental conservation model is used to project the system state (queue size) from the flow-in and flow-out measurements. The projection results are then updated with the measurement equation, using the time occupancies from mid-link and link-entrance loop detectors. This study also proposes a novel single point correction method, which resets the estimated system state to eliminate the counting errors that accumulate over time. In the performance evaluation, the proposed algorithm demonstrated accurate and reliable performance and consistently outperformed the benchmarked Single Occupancy Kalman filter (SOKF) method. The improvements over SOKF are 62% and 63% on average in terms of estimation accuracy (MAE) and reliability (RMSE), respectively. The benefit of the innovative concepts of the algorithm is well justified by the improved estimation performance in congested ramp traffic conditions, where long queues may significantly compromise the benchmark algorithm's performance.
Abstract:
The second volume of the Handbook on the Knowledge Economy is a worthy companion to the highly successful original volume published in 2005, extending its theoretical depth and developing its coverage. Together the two volumes provide the single best work and reference point for knowledge economy studies. The second volume, with fifteen original essays by renowned scholars in the field, provides insightful and robust analyses of the development potential of the knowledge economy in all its aspects, forms and manifestations.
Abstract:
The question of whether or not there exists a meaningful economic distinction between quits and layoffs has attracted considerable attention. This paper utilizes a recent test proposed by J. S. Cramer and G. Ridder (1991) to test formally whether quits and layoffs may legitimately be aggregated into a single undifferentiated job-mover category. The paper also estimates wage equations for job stayers, quits, and layoffs, corrected for the endogeneity of job mobility. The major results are that quits and layoffs cannot legitimately be pooled and that correction for sample selection appears to be important.
Abstract:
It has not yet been established whether the spatial variation of particle number concentration (PNC) within a microscale environment can affect exposure estimation results. In general, the degree of spatial variation within microscale environments remains unclear, since previous studies have only focused on spatial variation within macroscale environments. The aims of this study were to determine the spatial variation of PNC within microscale school environments, in order to assess the importance of the number of monitoring sites for exposure estimation. Furthermore, this paper aims to identify which parameters have the largest influence on spatial variation, as well as the relationship between those parameters and spatial variation. Air quality measurements were conducted for two consecutive weeks at each of 25 schools across Brisbane, Australia. PNC was measured at three sites within the grounds of each school, along with meteorological and several other air quality parameters. Traffic density was recorded for the busiest road adjacent to each school. Spatial variation at each school was quantified using the coefficient of variation (CV). The portion of the CV associated with instrument uncertainty was found to be 0.3; therefore, the CV was corrected so that only non-instrument uncertainty was analysed in the data. The median corrected CV (CVc) ranged from 0 to 0.35 across the schools, with 12 schools found to exhibit spatial variation. The study determined the number of monitoring sites required at schools with spatial variability and tested the deviation in exposure estimation arising from using only a single site. Nine schools required two measurement sites and three schools required three sites. Overall, the deviation in exposure estimation from using only one monitoring site was as much as one order of magnitude. The study also tested the association of spatial variation with wind speed, wind direction and traffic density, using partial correlation coefficients to identify sources of variation and non-parametric function estimation to quantify the level of variability. Traffic density and road-to-school wind direction were found to have a positive effect on CVc, and therefore on spatial variation. Wind speed was found to reduce spatial variation once it exceeded a threshold of 1.5 m/s, while it had no effect below this threshold. Traffic density had a positive effect on spatial variation, and its effect increased until a density of 70 vehicles per five minutes was reached, at which point its effect plateaued and did not increase further with increasing traffic density.
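The abstract does not state how the instrument-uncertainty component (0.3) was removed from the CV, so the correction in the sketch below (subtraction in quadrature from the raw between-site CV) is an assumption made purely for illustration; the site means used are likewise hypothetical.

import numpy as np

INSTRUMENT_CV = 0.3   # portion of the CV attributed to instrument uncertainty (from the abstract)

def corrected_cv(site_means, instrument_cv=INSTRUMENT_CV):
    # Raw spatial CV of PNC across the monitoring sites of one school, with the
    # instrument component removed in quadrature (assumed form of the correction).
    x = np.asarray(site_means, dtype=float)
    raw_cv = x.std(ddof=1) / x.mean()
    return max(raw_cv**2 - instrument_cv**2, 0.0) ** 0.5

# Hypothetical PNC means (particles/cm^3) at the three sites of one school:
print(round(corrected_cv([8.0e3, 1.2e4, 2.0e4]), 3))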
Abstract:
This paper proposes a new method for online secondary path modeling in feedback active noise control (ANC) systems. In practical cases, the secondary path is usually time varying, and online modeling of the secondary path is then required to ensure convergence of the system. In the literature, the secondary path is usually estimated offline, prior to online modeling, whereas the proposed system has no need for offline estimation. The proposed method consists of two parts: a noise controller based on the FxLMS algorithm, and a variable step size (VSS) LMS algorithm used to adapt the modeling filter to the secondary path. To achieve faster convergence and more accurate performance, the VSS-LMS algorithm is stopped at the optimum point. The computer simulation results shown in this paper indicate the effectiveness of the proposed method.
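The abstract names the two adaptive loops (an FxLMS noise controller and a VSS-LMS secondary-path model that is frozen near its optimum) but gives no equations, so the sketch below only illustrates that structure. The plant and path filters, the auxiliary-noise injection, the decaying step-size schedule and the freeze threshold are all illustrative assumptions, and the feedback-ANC reference synthesis is omitted in favour of a simpler feedforward-style loop.

import numpy as np

rng = np.random.default_rng(0)
L = 32                                    # adaptive filter lengths
s_true = rng.normal(0.0, 0.3, L)          # unknown secondary path (held fixed here for brevity)
w = np.zeros(L)                           # FxLMS control filter
s_hat = np.zeros(L)                       # VSS-LMS secondary-path model
mu_w, mu_s, freeze = 5e-4, 5e-2, 5e-3     # step sizes and VSS freeze threshold (assumed)

x_buf = np.zeros(L); xf_buf = np.zeros(L)
v_buf = np.zeros(L); yv_buf = np.zeros(L)
err = []
for n in range(20000):
    x = np.sin(0.1 * np.pi * n) + 0.05 * rng.normal()   # narrowband primary noise
    v = 0.05 * rng.normal()                              # auxiliary modelling noise
    x_buf = np.r_[x, x_buf[:-1]]
    v_buf = np.r_[v, v_buf[:-1]]

    y = w @ x_buf                         # anti-noise before the secondary path
    yv_buf = np.r_[y + v, yv_buf[:-1]]
    e = x + s_true @ yv_buf               # residual at the error microphone
    err.append(e)

    # VSS-LMS secondary-path model driven by the auxiliary noise
    e_s = e - s_hat @ v_buf
    mu_s_n = mu_s / (1.0 + 1e-3 * n)      # decaying (variable) step size
    if mu_s_n > freeze:                   # stop adapting once near the optimum
        s_hat += mu_s_n * e_s * v_buf

    # FxLMS: filter the reference through the current secondary-path estimate
    xf_buf = np.r_[s_hat @ x_buf, xf_buf[:-1]]
    w -= mu_w * e * xf_buf

print("residual power, first vs last 1000 samples:",
      np.mean(np.square(err[:1000])), np.mean(np.square(err[-1000:])))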
Abstract:
Objective: To determine the impact of a free-choice diet on nutritional intake and body condition of feral horses. Animals: Cadavers of 41 feral horses from 5 Australian locations. Procedures: Body condition score (BCS) was determined (scale of 1 to 9), and the stomach was removed from horses during postmortem examination. Stomach contents were analyzed for nutritional variables and macroelement and microelement concentrations. Data were compared among the locations and also compared with recommended daily intakes for horses. Results: Mean BCS varied by location; all horses were judged to be moderately thin. The BCS for males was 1 to 3 points higher than that of females. Amount of protein in the stomach contents varied from 4.3% to 14.9% and was significantly associated with BCS. Amounts of water-soluble carbohydrate and ethanol-soluble carbohydrate in stomach contents of feral horses from all 5 locations were higher than those expected for horses eating high-quality forage. Some macroelement and microelement concentrations were grossly excessive, whereas others were grossly deficient. There was no evidence of ill health among the horses. Conclusions and Clinical Relevance: Results suggested that the diet for several populations of feral horses in Australia appeared less than optimal. However, neither low BCS nor trace mineral deficiency appeared to affect survival of the horses. Additional studies on food sources in these regions, including analysis of water-soluble carbohydrate, ethanol-soluble carbohydrate, and mineral concentrations, are warranted to determine the provenance of such rich sources of nutrients. Determination of the optimal diet for horses may need revision.
Abstract:
In the face of changes in corporate regulation scholarship, the precepts of corporate governance and legal policy have minimized the controversies over the potential and limitations of corporate accountability mechanisms. In contemporary scholarly work on the implementation of corporate social responsibility (CSR), there is evidence supporting the implementation of CSR principles through legal regulation. Scholars and current practice, however, emphasize that this implementation should not be based on any single strategy. From this perspective, this article argues that the regulatory strategies for this implementation should be based on a fusion of legal sanctions, market incentives and the demands of private ordering.