Abstract:
Whereas it has been widely assumed that the Soviet music policy system had a "top-down" structure of control and command that directly affected musical creativity, my research shows that the relations between the different levels of the music policy system were in fact vague, and that the viewpoints of its representatives differed from one another. Because the representatives of the party and government organs controlling opera could not define which kind of music represented Socialist Realism, the system as it developed during the 1930s and 1940s never functioned effectively enough to establish such centralised control of Soviet music, still less could Soviet operas fulfil the highly ambiguous aesthetics of Socialist Realism. I show that musical discussions developed into ritualistic bureaucratic arenas, in which it became more important to expose heretical composers, make scapegoats of them and require them to perform self-criticism than to give directions on how to reach the artistic goals of Socialist Realism. When one opera was found unacceptable, this led to a strengthening of control by the party leadership, which in turn led to further operas, one after another, being revealed as failures. I have studied the control of the composition, staging and reception of the case-study operas, which remain obscure in the West despite growing scholarly interest in them, and have created a detailed picture of the foundation and development of the Soviet music control system in 1932–1950. My detailed discussion of such case-studies as Ivan Dzerzhinskii's The Quiet Don, Dmitrii Shostakovich's Lady Macbeth of Mtsensk District, Vano Muradeli's The Great Friendship, Sergei Prokofiev's Story of a Real Man, Tikhon Khrennikov's Frol Skobeev and Evgenii Zhukovskii's From All One's Heart backs the historically revisionist model of the development of Soviet music with documentary precision. In February 1948, composers belonging to the elite of the Union of Soviet Composers, e.g. Dmitri Shostakovich and Sergei Prokofiev, were accused in a Central Committee resolution of formalism, that is, of being under the influence of Western modernism. The accusations of formalism were connected to criticism of the considerable financial, material and social privileges these composers enjoyed in the leadership of the Union. With my new archival findings I give a more detailed picture of the financial background of the 1948 campaign. The independence of the music funding organisation of the Union of Soviet Composers (Muzfond) in deciding on its own finances was an exceptional phenomenon in the Soviet Union and contradicted the strivings to strengthen the control of Soviet music. The financial audits of the Union of Soviet Composers did not, however, change the elite status of some of its composers, except perhaps briefly in some cases. At the same time, the considerable financial autonomy of Soviet theatres was restricted. The cuts in the governmental funding allocated to Soviet theatres contradicted the intensified ideological demands placed on Soviet operas.
Abstract:
Organochlorine pesticides (OCPs) are ubiquitous environmental contaminants with adverse impacts on aquatic biota, wildlife and human health even at low concentrations. However, conventional methods for their determination in river sediments are resource intensive. This paper presents a rapid and reliable approach for the detection of OCPs. Accelerated Solvent Extraction (ASE) with in-cell silica gel clean-up, followed by triple quadrupole gas chromatography–tandem mass spectrometry (GC-MS/MS), was used to recover OCPs from sediment samples. Variables such as temperature, solvent ratio, adsorbent mass and number of extraction cycles were evaluated and optimised. With the exception of Aldrin, which was unaffected by any of the variables evaluated, the recovery of OCPs from sediment was influenced chiefly by solvent ratio and adsorbent mass and, to some extent, by the number of cycles and the temperature. The optimised conditions for OCP extraction from sediment with good recoveries were 4 cycles, 4.5 g of silica gel, 105 °C, and a 4:3 v/v DCM:hexane mixture. With the exception of two compounds (α-BHC and Aldrin), whose recoveries were low (59.73% and 47.66%, respectively), recoveries of the other pesticides were in the range 85.35–117.97% with precision < 10% RSD. The developed method significantly reduces sample preparation time, solvent consumption and matrix interference, and is highly sensitive and selective.
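As a minimal illustration of how recovery and precision (%RSD) figures like those quoted above are typically computed, here is a short sketch; the replicate concentrations are hypothetical, not the paper's data:

```python
import statistics

def recovery_percent(measured, spiked):
    """Mean recovery of a spiked analyte, in percent."""
    return 100.0 * statistics.mean(measured) / spiked

def rsd_percent(measured):
    """Relative standard deviation (precision), in percent."""
    return 100.0 * statistics.stdev(measured) / statistics.mean(measured)

# Hypothetical replicate concentrations (ng/g) for a 100 ng/g spike
replicates = [88.1, 92.4, 90.7, 85.9, 91.2]
print(f"recovery = {recovery_percent(replicates, 100.0):.1f} %")
print(f"RSD      = {rsd_percent(replicates):.1f} %")
```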
Abstract:
The increase in global temperature has been attributed to increased atmospheric concentrations of greenhouse gases (GHG), mainly CO2. The threat of severe and complex socio-economic and ecological implications of climate change has initiated an international process that aims to reduce emissions, to increase C sinks, and to protect existing C reservoirs; the Kyoto Protocol is an offspring of this process. The Kyoto Protocol and its accords state that signatory countries need to monitor their forest C pools and to follow the guidelines set by the IPCC in the preparation, reporting and quality assessment of C pool change estimates. The aims of this thesis were i) to estimate the changes in the carbon stocks of vegetation and soil in Finnish forests from 1922 to 2004, ii) to evaluate the applied methodology using empirical data, iii) to assess the reliability of the estimates by means of uncertainty analysis, iv) to assess the effect of forest C sinks on the reliability of the entire national GHG inventory, and finally, v) to present an application of model-based stratification to a large-scale sampling design for soil C stock changes. The applied methodology builds on measured forest inventory data (or modelled stand data) and uses statistical modelling to predict biomasses and litter production, as well as a dynamic soil C model to predict the decomposition of litter. The mean vegetation C sink of Finnish forests from 1922 to 2004 was 3.3 Tg C a^-1, and the mean soil sink was 0.7 Tg C a^-1. Soil is slowly accumulating C as a consequence of the increased growing stock and of soil C stocks that are unsaturated with respect to the current detritus input, which is higher than at the beginning of the period. Annual estimates of vegetation and soil C stock changes fluctuated considerably during the period and were frequently opposite in sign (e.g. vegetation was a sink while soil was a source). The inclusion of vegetation sinks in the national GHG inventory of 2003 increased its uncertainty from between -4% and 9% to ±19% (95% CI), and the further inclusion of upland mineral soils increased it to ±24%. The uncertainties of annual sinks can be reduced most efficiently by concentrating on the quality of the model input data. Despite the decreased precision of the national GHG inventory, the inclusion of uncertain sinks improves its accuracy owing to the larger sectoral coverage of the inventory. If national soil sink estimates were prepared by repeated soil sampling of model-stratified sample plots, the uncertainties would be accounted for in the stratum formation and sample allocation; otherwise, the gains in sampling efficiency from stratification remain smaller. The highly variable and frequently opposite annual changes in ecosystem C pools underline the importance of full ecosystem C accounting. If forest C sink estimates are to be used in practice, average sink estimates seem a more reasonable basis than annual estimates, since annual forest sinks vary considerably, annual estimates are uncertain, and these uncertainties have severe consequences for the reliability of the total national GHG balance. The estimation of average sinks should still be based on annual or even more frequent data, owing to the non-linear decomposition process that is influenced by the annual climate. The methodology used in this study to predict forest C sinks can be transferred to other countries with some modifications.
The ultimate verification of sink estimates should be based on comparison to empirical data, in which case the model-based stratification presented in this study can serve to improve the efficiency of the sampling design.
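How combining uncertain sinks widens a confidence interval can be sketched with a small Monte Carlo propagation. The mean sinks below are the thesis's figures, but the standard deviations and the independence assumption are invented purely for illustration:

```python
import random

def mc_total_sink(n=100_000):
    """Monte Carlo propagation of independent vegetation and soil sink
    uncertainties into the combined forest sink (illustrative spreads only)."""
    totals = []
    for _ in range(n):
        veg  = random.gauss(3.3, 0.5)   # vegetation sink, Tg C/a (assumed sd)
        soil = random.gauss(0.7, 0.4)   # soil sink, Tg C/a (assumed sd)
        totals.append(veg + soil)
    totals.sort()
    lo, hi = totals[int(0.025 * n)], totals[int(0.975 * n)]
    return sum(totals) / n, (lo, hi)

mean, ci = mc_total_sink()
print(f"combined sink = {mean:.2f} Tg C/a, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")
```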
Abstract:
The rapid increase in genome sequence information has necessitated the annotation of functional elements, particularly those occurring in non-coding regions, in their genomic context. The promoter is the key regulatory region that enables a gene to be transcribed or repressed, but it is difficult to determine experimentally. Hence, in silico identification of promoters is crucial in order to guide experimental work and to pinpoint the key region that controls the transcription initiation of a gene. In this analysis, we demonstrate that while promoter regions are in general less stable than their flanking regions, their average free energy varies with the GC composition of the flanking genomic sequence. We have therefore obtained a set of free energy threshold values for genomic DNA of varying GC content and used them as generic criteria for predicting promoter regions in several microbial genomes, using the in-house developed tool `PromPredict'. On applying it to predict promoter regions corresponding to the 1144 and 612 experimentally validated TSSs in E. coli (50.8% GC) and B. subtilis (43.5% GC), sensitivities of 99% and 95% and precision values of 58% and 60%, respectively, were achieved. For the limited data set of 81 TSSs available for M. tuberculosis (65.6% GC), a sensitivity of 100% and a precision of 49% were obtained.
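For reference, the sensitivity and precision figures quoted above follow the usual confusion-matrix definitions. A minimal sketch (the TP/FN/FP counts are back-calculated to match the E. coli figures for illustration, not taken from the paper):

```python
def sensitivity(tp, fn):
    """Fraction of experimentally validated TSSs recovered: TP / (TP + FN)."""
    return tp / (tp + fn)

def precision(tp, fp):
    """Fraction of predicted promoters that are genuine: TP / (TP + FP)."""
    return tp / (tp + fp)

# Illustrative counts consistent with the E. coli figures quoted above:
# ~99% of 1144 TSSs recovered, at ~58% precision.
tp, fn = 1133, 11
fp = round(tp / 0.58) - tp
print(f"sensitivity = {sensitivity(tp, fn):.1%}, precision = {precision(tp, fp):.1%}")
```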
Abstract:
We evaluated techniques for estimating animal density through direct counts using line transects during 1988-92 in the tropical deciduous forests of Mudumalai Sanctuary in southern India for four species of large herbivorous mammals, namely, chital (Axis axis), sambar (Cervus unicolor), Asian elephant (Elephas maximus) and gaur (Bos gaurus). Density estimates derived from the Fourier series and half-normal models consistently had the lowest coefficients of variation, and these two models also generated similar mean density estimates. For the Fourier series estimator, appropriate cut-off widths for analysing line transect data for the four species are suggested. Grouping data into various distance classes did not produce any appreciable differences in estimates of mean density or their variances, although model fit was generally better when data were placed in fewer groups. The sampling effort needed to achieve a desired precision (coefficient of variation) in the density estimate is derived. A sampling effort of 800 km of transects yielded a 10% coefficient of variation on the density estimate for chital; the other species required a higher effort to achieve this level of precision. There was no statistically significant relationship between the detectability of a group and group size for any species. Density estimates along roads were generally significantly different from those in the interior of the forest, indicating that road-side counts may not be appropriate for most species.
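To make the half-normal estimator concrete: under a half-normal detection function g(x) = exp(-x^2 / 2σ^2), the effective strip width is σ√(π/2), and density follows from detections per unit of effectively surveyed area. A minimal sketch with invented sighting distances, not the study's data:

```python
import math

def half_normal_density(distances_m, line_length_km, mean_group_size=1.0):
    """Line-transect density under a half-normal detection function.
    sigma is the maximum-likelihood estimate for half-normal distances."""
    n = len(distances_m)
    sigma = math.sqrt(sum(x * x for x in distances_m) / n)   # ML estimate
    esw = sigma * math.sqrt(math.pi / 2.0)                   # effective strip width, m
    # effectively surveyed area: a strip of width 2*esw along the transect
    area_km2 = 2.0 * esw / 1000.0 * line_length_km
    return n * mean_group_size / area_km2

# Hypothetical perpendicular sighting distances (m) along 10 km of transect
dists = [5.0, 12.0, 3.5, 20.0, 8.0, 15.5, 2.0, 11.0]
print(f"density ~ {half_normal_density(dists, 10.0, mean_group_size=3.0):.1f} animals/km^2")
```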
Abstract:
Background: The aging population is placing increasing demands on surgical services, at the same time as the supply of professional labor is decreasing and the economic situation is worsening. Under growing financial constraints, successful operating room management will be one of the key issues in the struggle for technical efficiency. This study focused on several issues affecting operating room efficiency. Materials and methods: The current formal operating room management in Finland, and the performance metrics and information systems used to support this management, were explored using a postal survey. We also studied the feasibility of a wireless patient tracking system as a tool for managing the process. The reliability of the system as well as the accuracy and precision of its automatically recorded time stamps were analyzed. The benefits of a separate anesthesia induction room were compared in a prospective setting with the traditional way of working, where anesthesia is induced in the operating room. Using computer simulation, several models of parallel processing for the operating room were compared with the traditional model with respect to cost-efficiency. Moreover, international differences in operating room times for two common procedures, laparoscopic cholecystectomy and open lung lobectomy, were investigated. Results: The managerial structure of Finnish operating units was not clearly defined. Operating room management information systems were found to be out of date, offering little support for online evaluation of the care process. Only about half of the information systems provided information in real time. Operating room performance was most often measured by the number of procedures per time unit, operating room utilization, and turnover time. The wireless patient tracking system was found to be feasible for hospital use. Its automatic documentation facilitated patient flow management by increasing process transparency via more available and accurate data, while lessening the staff's workload. Every parallel workflow model was more cost-efficient than the traditional practice of performing anesthesia induction in the operating room. Mean operating times for the two common procedures differed by 50% among eight hospitals in different countries. Conclusions: The structure of daily operative management of an operating room warrants redefinition. Performance measures as well as information systems require updating. Parallel workflows are more cost-efficient than the traditional induction-in-room model.
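Why parallel induction pays off can be seen with a back-of-the-envelope throughput model: when the next patient is anaesthetised in a separate room during the preceding surgery, induction time drops out of the operating room's critical path. The durations below are assumptions for illustration, not the study's simulation inputs:

```python
def cases_completed(day_minutes, induction, surgery, turnover, parallel):
    """Count cases finished in one OR day. With parallel induction, anaesthesia
    for the next patient overlaps the previous surgery, so induction occupies
    OR time only for the first case of the day."""
    t, cases, first = 0, 0, True
    while True:
        slot = surgery + turnover + (induction if (first or not parallel) else 0)
        if t + slot > day_minutes:
            break
        t += slot
        cases += 1
        first = False
    return cases

# Assumed (illustrative) durations in minutes for an 8-hour OR day
for par in (False, True):
    n = cases_completed(480, induction=25, surgery=90, turnover=15, parallel=par)
    print(f"parallel={par}: {n} cases per day")
```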
Abstract:
In this paper we discuss a new technique for imaging the surfaces of metallic substrates using field emission from a pointed array of carbon nanotubes (CNTs). We consider a pointed height distribution of the CNT array in a diode configuration, with two side gates maintained at a negative potential to obtain a highly intense beam of electrons localized at the center of the array. The CNT array on a metallic substrate serves as the cathode and the test substrate as the anode. Scanning the test substrate with the cathode reveals that the field emission current is highly sensitive to surface features, with nanometer resolution. Surface features of semi-circular, triangular and rectangular geometries (projections and grooves) are considered in the simulations. This surface scanning/mapping technique can be applied to surface roughness measurement with nanoscale accuracy, micro/nano damage detection, high-precision displacement sensors, vibrometers and accelerometers, among other applications.
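The sensitivity rests on the roughly exponential dependence of the field emission current on the local field. As a sketch only: a bare Fowler-Nordheim parallel-plate estimate, ignoring the CNT tip field enhancement the paper actually models, with bias, gap and work function as assumed values:

```python
import math

A = 1.54e-6   # A*eV/V^2, first Fowler-Nordheim constant
B = 6.83e9    # eV^-1.5 * V/m, second Fowler-Nordheim constant

def fn_current_density(field_v_per_m, work_function_ev=4.8):
    """Fowler-Nordheim current density J (A/m^2) for a local field F."""
    F, phi = field_v_per_m, work_function_ev
    return (A * F**2 / phi) * math.exp(-B * phi**1.5 / F)

# A surface bump reduces the local gap, raising the field and the current.
V, gap = 100.0, 100e-9          # assumed bias (V) and nominal gap (m)
for feature_nm in (0.0, 2.0, 5.0):
    F = V / (gap - feature_nm * 1e-9)   # crude parallel-plate field estimate
    print(f"feature {feature_nm:>3} nm: J = {fn_current_density(F):.3e} A/m^2")
```

Even a few nanometres of feature height changes the current by large factors, which is what makes the scan sensitive.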
Abstract:
It has long been argued that the better timing precision afforded by satellites like the Rossi X-ray Timing Explorer (RXTE) will allow us to measure the orbital eccentricity and the angle of periastron of some of the bright persistent high-mass X-ray binaries (HMXBs), and hence possibly to measure apsidal motion in these systems. Measuring the rate of apsidal motion allows one to estimate the apsidal motion constant of the mass-losing companion star and hence permits direct testing of stellar structure models for the giant stars present in HMXBs. In the present paper, we use archival RXTE data on two bright persistent sources, Cen X-3 and SMC X-1, to measure the very small orbital eccentricity and the angle of periastron. We find that small variations in the pulse profiles of these sources, rather than the intrinsic time resolution provided by RXTE, limit the accuracy with which we can measure the arrival times of pulses from these sources. This in turn limits the accuracy with which one can measure the orbital parameters, especially the very small eccentricity and the angle of periastron. The observations of SMC X-1 in the year 2000 were taken during the high-flux state of the source, and we could determine the orbital eccentricity and the angle of periastron (ω) using this data set.
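The measurement rests on fitting pulse arrival-time delays across the orbit. A minimal sketch of the underlying Roemer-delay model follows; the orbital numbers are illustrative values of the order of Cen X-3's, not the paper's fits:

```python
import math

def roemer_delay(t, Porb, asini_lt_s, e, omega_deg, t_peri):
    """Light-travel-time delay (s) of pulse arrival across a Keplerian orbit.
    Kepler's equation is solved by fixed-point iteration; fine for small e."""
    M = 2.0 * math.pi * (t - t_peri) / Porb          # mean anomaly
    E = M
    for _ in range(20):                               # converges fast for e << 1
        E = M + e * math.sin(E)
    w = math.radians(omega_deg)
    # projected line-of-sight position of the pulsar, in light-seconds
    return asini_lt_s * (math.sin(w) * (math.cos(E) - e)
                         + math.cos(w) * math.sqrt(1 - e * e) * math.sin(E))

# Assumed, illustrative parameters of the order of Cen X-3's orbit
for frac in (0.0, 0.25, 0.5, 0.75):
    d = roemer_delay(frac * 2.087 * 86400, 2.087 * 86400, 39.66, 0.001, 90.0, 0.0)
    print(f"orbital phase {frac:.2f}: delay = {d:+8.2f} s")
```

A non-zero eccentricity distorts this delay curve away from a pure sinusoid by only parts in a thousand, which is why sub-pulse-period timing noise from profile variations dominates the error budget.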
Abstract:
A considerable amount of work has been dedicated to the development of analytical solutions for the flow of chemical contaminants through soils. Most of the analytical solutions for complex transport problems are closed-form series solutions, whose convergence depends on the eigenvalues obtained from a corresponding transcendental equation. The difficulty of obtaining exact solutions from analytical models thus encourages the use of numerical solutions for parameter estimation, even though the latter models are computationally expensive. In this paper a combination of two swarm-intelligence-based algorithms is used for accurate estimation of design transport parameters from the closed-form analytical solutions. Estimation of the eigenvalues from the transcendental equation is treated as a multimodal discontinuous function optimization problem, and the eigenvalues are estimated using an algorithm based on the glowworm swarm strategy. Parameter estimation for the inverse problem is handled using the standard particle swarm optimization (PSO) algorithm. The integration of these two algorithms enables accurate estimation of design parameters from closed-form analytical solutions. The solver is applied to a real-world inverse problem in environmental engineering. The inverse model based on swarm intelligence techniques is validated and its accuracy in parameter estimation is demonstrated. The proposed solver quickly estimates the design parameters with great precision.
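To show what the eigenvalue subproblem looks like, here is a deterministic bracketing-and-bisection stand-in for the glowworm swarm search, applied to a generic transcendental equation of the kind that arises in such series solutions (x·tan x = 1 is a placeholder, not the paper's equation):

```python
import math

def eigenvalues(f, x_max, step=1e-3, tol=1e-12):
    """Locate all roots of a transcendental equation f(x) = 0 on (0, x_max)
    by sign-change bracketing followed by bisection. A simple deterministic
    stand-in for the paper's glowworm-swarm search over a multimodal problem."""
    roots, x = [], step
    fx = f(x)
    while x < x_max:
        x2 = x + step
        fx2 = f(x2)
        # a genuine root changes sign with small |f|; a tan-pole changes sign
        # with huge |f|, so large-magnitude brackets are skipped
        if fx * fx2 < 0 and abs(fx) < 1e3 and abs(fx2) < 1e3:
            a, b = x, x2
            while b - a > tol:
                m = 0.5 * (a + b)
                if f(a) * f(m) <= 0:
                    b = m
                else:
                    a = m
            roots.append(0.5 * (a + b))
        x, fx = x2, fx2
    return roots

# Generic example: x*tan(x) = 1 has one root on each branch of tan
print([round(r, 4) for r in eigenvalues(lambda x: x * math.tan(x) - 1.0, 10.0)])
```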
Abstract:
Recent technical advances have enabled, for the first time, reliable in vitro culture of prostate cancer samples as prostate cancer organoids. This breakthrough opens the significant possibility of high-throughput drug screening covering the spectrum of prostate cancer phenotypes seen clinically. These advances will enable precision medicine to become a reality, allowing patient samples to be screened for effective therapeutics ex vivo, with treatments tailored to the individual. This will hopefully lead to enhanced clinical outcomes, avoiding morbidity due to ineffective therapies and improving quality of life in men with advanced prostate cancer.
Abstract:
A key challenge of wide-area kinematic positioning is to overcome the effects of the varying hardware biases in the code signals of the BeiDou system. Based on three geometry-free/ionosphere-free combinations, the elevation-dependent code biases are modelled for all BeiDou satellites. Results from 30-day data sets for 5 baselines of 533 to 2545 km demonstrate that wide-lane (WL) integer-fixing success rates of 98% to 100% can be achieved within 25 min. For HDOP values of less than 2, the overall RMS statistics show that ionosphere-free WL single-epoch solutions achieve 24 to 50 cm in the horizontal direction. Smoothing over a moving window of 20 min reduces the RMS values by a factor of about 2. Given the distance-independent nature of these results, reliable, high-precision positioning services could potentially be provided over a wide area from a sparsely distributed ground network.
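The smoothing above is applied to the solutions; the same idea in the observation domain is the classic Hatch filter, which steers noisy code pseudoranges with quiet carrier-phase differences. A sketch with synthetic numbers (window length, noise levels and the range itself are assumed values, not the paper's processing):

```python
import random

def hatch_filter(code, carrier, window):
    """Carrier-smoothed code pseudoranges (classic Hatch filter).
    `code` and `carrier` are same-length lists in metres; `window` is the
    smoothing window in epochs (e.g. 20 min of 30 s data -> 40 epochs)."""
    smoothed = [code[0]]
    for k in range(1, len(code)):
        n = min(k + 1, window)
        # carry the previous smoothed range forward with the carrier delta,
        # then blend in the new code observation with weight 1/n
        predicted = smoothed[-1] + (carrier[k] - carrier[k - 1])
        smoothed.append(code[k] / n + predicted * (n - 1) / n)
    return smoothed

# Illustrative use: 0.5 m code noise, mm-level carrier noise, constant range
truth = 2.0e7
code    = [truth + random.gauss(0.0, 0.5)   for _ in range(240)]
carrier = [truth + random.gauss(0.0, 0.003) for _ in range(240)]
out = hatch_filter(code, carrier, window=40)
print(f"raw sd ~0.5 m, smoothed last-epoch error = {out[-1] - truth:+.3f} m")
```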
Abstract:
Skew correction of complex document images is a difficult task. We propose an edge-based connected component approach for robust skew correction of documents with complex layout and content. The algorithm essentially consists of two steps: an 'initialization' step to determine the image orientation from the centroids of the connected components, and a 'search' step to find the actual skew of the image. During initialization, we choose two different sets of points regularly spaced across the image, one running from left to right and the other from top to bottom. The image orientation is determined from the slope between the two successive nearest neighbors of each of the points in the chosen set. The search step then finds successive nearest neighbors that satisfy the parameters obtained in the initialization step, and the final skew is determined from the slopes obtained in the 'search' step. Unlike other connected component based methods, the proposed method does not require the binarization step that generally precedes connected component analysis. The method works well for scanned documents with complex layout and arbitrary skew, to a precision of 0.5 degrees.
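The core idea of estimating skew from slopes between neighbouring component centroids can be sketched as follows. This is a crude single-pass illustration of the centroid-slope principle, not the paper's two-step initialization/search algorithm:

```python
import math

def skew_from_centroids(centroids, max_gap=80.0):
    """Estimate document skew (degrees) as the median slope between each
    component centroid and its nearest neighbour to the right."""
    angles = []
    for (x1, y1) in centroids:
        best, best_d = None, max_gap
        for (x2, y2) in centroids:
            dx, dy = x2 - x1, y2 - y1
            d = math.hypot(dx, dy)
            if dx > 0 and d < best_d:          # nearest neighbour to the right
                best, best_d = (dx, dy), d
        if best:
            angles.append(math.degrees(math.atan2(best[1], best[0])))
    angles.sort()
    return angles[len(angles) // 2] if angles else 0.0

# Synthetic centroids along a text line skewed by ~2 degrees
pts = [(x, 100 + x * math.tan(math.radians(2.0))) for x in range(0, 400, 20)]
print(f"estimated skew = {skew_from_centroids(pts):.2f} deg")
```

Using the median slope keeps the estimate robust against the occasional neighbour pair that does not lie on the same text line.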
Abstract:
Purpose: To examine the effects of gaze position and optical blur, similar to that induced by multifocal corrections, on stepping accuracy in a precision stepping task among older adults. Methods: Nineteen healthy older adults (mean age, 71.6 ± 8.8 years) with normal vision performed a series of precision stepping tasks onto a fixed target. The stepping tasks were performed in a repeated-measures design for three gaze positions (fixating on the stepping target, and 30 and 60 cm farther forward of the stepping target) and two visual conditions (best-corrected vision and +2.50 DS blur). Participants' gaze position was tracked using a head-mounted eye tracker. Absolute, anteroposterior, and mediolateral foot placement errors and within-subject foot placement variability were calculated from the locations of foot- and floor-mounted retroreflective markers captured by flash photography of the final foot position. Results: Participants made significantly larger absolute and anteroposterior foot placement errors and exhibited greater foot placement variability when their gaze was directed farther forward of the stepping target. Blur led to significantly increased absolute and anteroposterior foot placement errors and increased foot placement variability. Furthermore, blur differentially increased the absolute and anteroposterior foot placement errors and variability when gaze was directed 60 cm farther forward of the stepping target. Conclusions: Directing gaze farther ahead of stepping locations and the presence of blur negatively affect the stepping accuracy of older adults. These findings indicate that blur similar to that induced by multifocal corrections has the potential to increase the risk of trips and falls among older populations when negotiating challenging environments where precision stepping is required, particularly as gaze is directed farther ahead of stepping locations when walking.
Abstract:
The TOTEM experiment at the LHC will measure the total proton-proton cross-section with a precision better than 1%, elastic proton scattering over a wide range in momentum transfer -t = p^2 θ^2 up to 10 GeV^2, and diffractive dissociation, including single, double and central diffractive topologies. The total cross-section will be measured with the luminosity-independent method, which requires simultaneous measurement of the total inelastic rate and of elastic proton scattering down to four-momentum transfers of a few 10^-3 GeV^2, corresponding to leading protons scattered at angles of microradians from the interaction point. This will be achieved using silicon microstrip detectors, which offer attractive properties such as good spatial resolution (< 20 µm), fast response (O(10 ns)) to particles, and radiation hardness up to 10^14 "n"/cm^2. This work reports on the development of an innovative structure at the detector edge that reduces the conventional dead width of 0.5-1 mm to 50-60 µm, compatible with the requirements of the experiment.
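For reference, the luminosity-independent method combines the optical theorem with the total (elastic plus inelastic) rate; in standard notation, with ρ the ratio of the real to the imaginary part of the forward elastic amplitude:

```latex
% Total cross-section and luminosity from measured rates alone:
\sigma_{\mathrm{tot}} = \frac{16\pi}{1+\rho^{2}}\,
  \frac{\left.\mathrm{d}N_{\mathrm{el}}/\mathrm{d}t\right|_{t=0}}{N_{\mathrm{el}}+N_{\mathrm{inel}}},
\qquad
\mathcal{L} = \frac{1+\rho^{2}}{16\pi}\,
  \frac{\left(N_{\mathrm{el}}+N_{\mathrm{inel}}\right)^{2}}{\left.\mathrm{d}N_{\mathrm{el}}/\mathrm{d}t\right|_{t=0}}.
```

This is why the elastic rate must be measured down to very small |t|: the extrapolation to the optical point t = 0 enters both expressions directly.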
Abstract:
By detecting the leading protons produced in the central exclusive diffractive process, p+p → p+X+p, one can measure the missing mass and scan for possible new particle states such as the Higgs boson. This process augments, in a model-independent way, the standard methods for new particle searches at the Large Hadron Collider (LHC), and will allow detailed analyses of the produced central system, such as the spin-parity properties of the Higgs boson. The exclusive central diffractive process also makes possible precision studies of gluons at the LHC and complements the physics scenarios foreseen at the next e+e− linear collider. This thesis first presents the conclusions of the first systematic analysis of the expected precision of the leading proton momentum measurement and the accuracy of the reconstructed missing mass. In this initial analysis, the scattered protons are tracked along the LHC beam line, and the uncertainties expected in beam transport and in the detection of the scattered leading protons are accounted for. The main focus of the thesis is on developing the radiation-hard precision detector technology necessary for coping with the extremely demanding experimental environment of the LHC. This will be achieved using a 3D silicon detector design, which, in addition to radiation hardness up to 5×10^15 neutrons/cm^2, offers properties such as a high signal-to-noise ratio, fast signal response to radiation, and sensitivity close to the very edge of the detector. This work reports on the development of a novel semi-3D detector design that simplifies the 3D fabrication process while conserving the properties of the 3D detector design required at the LHC and in other imaging applications.