938 results for "methods of resolution enhancement"
Abstract:
Genetic recombination can lead to the formation of intermediates in which DNA molecules are linked by Holliday junctions. Movement of a junction along DNA, by a process known as branch migration, leads to heteroduplex formation, whereas resolution of a junction completes the recombination process. Holliday junctions can be resolved in either of two ways, yielding products in which there has, or has not, been an exchange of flanking markers. The ratio of these products is thought to be determined by the frequency with which the two isomeric forms (conformers) of the Holliday junction are cleaved. Recent studies with enzymes that process Holliday junctions in Escherichia coli, the RuvABC proteins, however, indicate that protein binding causes the junction to adopt an open square-planar configuration. Within such a structure, DNA isomerization can have little role in determining the orientation of resolution. To determine the role that junction-specific protein assembly has in determining resolution bias, a defined in vitro system was developed in which we were able to direct the assembly of the RuvABC resolvasome. We found that the bias toward resolution in one orientation or the other was determined simply by the way in which the Ruv proteins were positioned on the junction. Additionally, we provide evidence that supports current models on RuvABC action in which Holliday junction resolution occurs as the resolvasome promotes branch migration.
Abstract:
The present research deals with an important public health threat: the pollution created by radon gas accumulation inside dwellings. The spatial modeling of indoor radon in Switzerland is particularly complex and challenging because of the many influencing factors that should be taken into account. Indoor radon data analysis must be addressed from both a statistical and a spatial point of view. Because indoor radon is a multivariate process, it was important first to define the influence of each factor, and in particular the influence of geology, which is closely associated with indoor radon. This association was indeed observed for the Swiss data but did not prove to be the sole determinant for the spatial modeling. The statistical analysis of the data, at both univariate and multivariate levels, was followed by an exploratory spatial analysis. Many tools proposed in the literature were tested and adapted, including fractality, declustering and moving-window methods. The use of the Quantité Morisita Index (QMI) as a procedure to evaluate data clustering as a function of the radon level was proposed. The existing declustering methods were revised and applied in an attempt to approach the global histogram parameters. The exploratory phase was accompanied by the definition of multiple scales of interest for indoor radon mapping in Switzerland. The analysis was done with a top-down resolution approach, from regional to local levels, in order to find the appropriate scales for modeling. In this sense, data partitioning was optimized in order to cope with the stationarity conditions of geostatistical models. Common methods of spatial modeling such as K Nearest Neighbors (KNN), variography and General Regression Neural Networks (GRNN) were proposed as exploratory tools. In the following section, different spatial interpolation methods were applied to a particular dataset.
A bottom-up approach to method complexity was adopted, and the results were analyzed together in order to find common definitions of continuity and neighborhood parameters. Additionally, a data filter based on cross-validation (the CVMF) was tested with the purpose of reducing noise at the local scale. At the end of the chapter, a series of tests of data consistency and method robustness was performed. This led to conclusions about the importance of data splitting and the limitations of generalization methods for reproducing statistical distributions. The last section was dedicated to modeling methods with probabilistic interpretations. Data transformation and simulations thus allowed the use of multigaussian models and helped take the uncertainty of the indoor radon pollution data into consideration. The categorization transform was presented as a solution for modeling extreme values through classification. Simulation scenarios were proposed, including an alternative proposal for the reproduction of the global histogram based on the sampling domain. Sequential Gaussian simulation (SGS) was presented as the method giving the most complete information, while classification performed in a more robust way. An error measure was defined in relation to the decision function for hardening the data classification. Among the classification methods, probabilistic neural networks (PNN) proved better adapted to modeling high-threshold categorization and to automation. Support vector machines (SVM), by contrast, performed well under balanced category conditions. In general, it was concluded that no particular prediction or estimation method is better under all conditions of scale and neighborhood definition. Simulations should be the basis, while other methods can provide complementary information to support efficient decision making on indoor radon.
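As a rough illustration of the quadrat-count idea behind the Morisita index mentioned above, the following sketch computes the classical two-dimensional index (the function and the toy data are illustrative assumptions, not the thesis implementation):

```python
import numpy as np

def morisita_index(x, y, n_quadrats_per_side):
    """Morisita index of dispersion for 2-D point data.

    Partitions the bounding box into Q = q*q quadrats, counts points
    per quadrat, and returns I_M = Q * sum(n_i*(n_i-1)) / (N*(N-1)).
    I_M is close to 1 for a random pattern and > 1 for a clustered one.
    """
    q = n_quadrats_per_side
    counts, _, _ = np.histogram2d(x, y, bins=q)
    n = counts.ravel()
    N = n.sum()
    Q = q * q
    return Q * np.sum(n * (n - 1)) / (N * (N - 1))

# Clustered toy data: two tight Gaussian clumps push the index well above 1.
rng = np.random.default_rng(0)
pts = np.vstack([rng.normal(0.2, 0.02, (500, 2)),
                 rng.normal(0.8, 0.02, (500, 2))])
i_m = morisita_index(pts[:, 0], pts[:, 1], 10)
```

Values near 1 indicate spatial randomness; the strongly clustered toy data yield an index far above 1, which is the kind of clustering signal the thesis evaluates as a function of the radon level.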
Abstract:
BACKGROUND: Although the surgical treatment of full-thickness macular hole is well established, the utility of pars plana vitrectomy in the treatment of lamellar macular hole (LMH) remains less clear. The purpose of this study is to report the functional results of surgical treatment of LMH associated with epiretinal membrane. METHODS: Retrospective chart review of patients undergoing pars plana vitrectomy and peeling of the epiretinal membrane and internal limiting membrane, with or without air or gas tamponade, for symptomatic LMH associated with epimacular membrane. RESULTS: Forty-five eyes of 44 patients were operated on for LMH associated with epimacular membrane between May 2000 and July 2009. Pars plana vitrectomy and membrane peeling were combined with air or gas tamponade in 43 of 45 cases. Mean logarithm of the minimum angle of resolution best-corrected visual acuity improved from 0.4 preoperatively to 0.13 postoperatively (P < 0.0001). Improvement in visual acuity ranged from 0 Early Treatment Diabetic Retinopathy Study (ETDRS) lines to 8.9 ETDRS lines (mean, 2.65 ETDRS lines). Visual acuity improved by ≥1 ETDRS line in 40 of 45 eyes (89%) and by ≥2 ETDRS lines in 26 of 45 eyes (58%) after the surgical procedure. No patient lost vision. CONCLUSION: This small retrospective study suggests that surgical treatment of LMH associated with epimacular membrane may improve visual acuity in symptomatic patients.
Abstract:
Purpose: Coats' disease is a non-hereditary condition characterized by idiopathic retinal telangiectasia and exudative retinopathy. Although the exudation often spreads from the main areas of telangiectasia, there is a preferential accumulation of exudate in the macular area in Coats' disease. A subfoveal nodule has usually been described in the context of resolution of macular exudates after treatment of peripheral retinal telangiectasis. Nevertheless, a recent report described an uncommon, prominent subfoveal nodule with peripheral exudates as the initial presentation of Coats' disease. The purpose of this study was to report the prevalence of this presentation in a cohort of patients. Methods: All consecutive patients with Coats' disease referred to the Jules Gonin Eye Hospital between January 1979 and July 2006 were included. All charts were screened for a clear-cut subfoveal circular lesion on fundus photographs at initial presentation. Results: 95 patients suffering from Coats' disease were enrolled. 33 of the 95 patients had subtotal or total exudative retinal detachment, which impeded macular examination. 14 of the remaining 62 patients (22.6%) presented with a clear-cut, prominent, circular subfoveal lesion at initial presentation. All patients had unilateral disease. Mean age was 5.6 ± 3.5 years at initial presentation. There were 4 females and 10 males. Pigmentation and size of the nodule were not homogeneous. Mean diameter was 1.1 ± 0.5 optic disc diameters. Conclusions: The present study shows that a subfoveal nodule is not as rare a primary presentation of Coats' disease as has previously been reported in the literature. Thus, the initial finding of a prominent subfoveal nodule associated with peripheral retinal findings makes the diagnosis of Coats' disease highly likely. Physicians should be aware that a prominent subfoveal nodule is a common initial presentation of Coats' disease, as it can be confused clinically with retinoblastoma.
Abstract:
The contribution of muscle biopsies to the diagnosis of neuromuscular disorders and the indications for various methods of examination were investigated by analysis of 889 biopsies from patients suffering from myopathic and/or neurogenic disorders. Histo-enzymatic studies performed on frozen material, as well as immunohistochemistry and electron microscopy, allowed specific diagnoses to be made in all the neurogenic disorders (polyneuropathies and motor neuron diseases), whereas one third of myopathies remained uncertain. Comparison of the neuropathological data with the clinical indications for histological investigation shows that muscle biopsies reveal the diagnosis in 25% of cases (mainly in congenital and metabolic myopathies) and confirm and/or complete the clinical diagnosis in 50%. In the remaining cases, with nonspecific abnormalities, neuropathological investigations may help the clinician by excluding well-defined neuromuscular disorders. Analysis of the studies performed and of their results shows the contribution and specificity of each method for diagnosis. Statistical evaluation of this series indicates that cryostat sectioning for histo- and immunochemical studies and electron microscopy increases the rate of diagnosis of neuromuscular diseases: full investigation was necessary for the diagnosis in 30% of cases. The interpretation of the wide range of pathological reactions in muscle requires close cooperation with the clinician.
Abstract:
Road dust is caused by wind entraining fine material from the roadway surface; the main source of Iowa road dust is attrition of the carbonate rock used as aggregate. The mechanisms of dust suppression can be considered as two processes: increasing the particle size of the surface fines by agglomeration, and inhibiting degradation of the coarse material. Agglomeration may occur by capillary tension in the pore water, by surfactants that increase bonding between clay particles, and by cements that bind the mineral matter together. Hygroscopic dust suppressants such as calcium chloride have short durations of effectiveness because capillary tension is their primary agglomeration mechanism. Somewhat more permanent agglomeration results from chemicals that cement smaller particles into a mat or into larger particles. These cements include lignosulfonates, resins, and asphalt products. The duration of the cements depends on their solubility and the climate. The only dust palliative that decreases aggregate degradation is shredded shingles, which act as cushions between aggregate particles. It is likely that synthetic polymers also provide some protection against coarse aggregate attrition. Calcium chloride and lignosulfonates are widely used in Iowa. Both palliatives have a useful duration of about 6 months. Calcium chloride is effective with surface soils of moderate fines content and plasticity, whereas lignin works best with materials that have high fines content and high plasticity indices. Bentonite appears to be effective for up to two years and works well with surface materials having low fines and plasticity, as well as with limestone aggregate. Selection of appropriate dust suppressants should be based on characterization of the road surface material. Estimation of dosage rates for potential palliatives can be based on data from this report, technical reports, information from reliable vendors, or laboratory screening tests.
The selection should include economic analysis of construction and maintenance costs. The effectiveness of the treatment should be evaluated by any of the field performance measuring techniques discussed in this report. Novel dust control agents that need research for potential application in Iowa include: acidulated soybean oil (soapstock), soybean oil, ground-up asphalt shingles, and foamed asphalt. New laboratory evaluation protocols to screen additives for potential effectiveness and to determine dosage are needed. A modification of ASTM D 560 to estimate the freeze-thaw and wet-dry durability of Portland cement stabilized soils would be a starting point for improved laboratory testing of dust palliatives.
Abstract:
In this paper, some steganalytic techniques designed to detect the existence of messages hidden with histogram shifting methods are presented. First, some techniques to identify specific histogram shifting methods, based on visible marks on the histogram or on abnormal statistical distributions, are suggested. Then, we present a general technique capable of detecting all the histogram shifting techniques analyzed. This technique is based on the effect of histogram shifting methods on the "volatility" of the histogram of differences and on the study of its reduction whenever new data are hidden.
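A minimal sketch of the "visible marks" idea: histogram shifting empties a bin next to a well-populated one to make room for embedding, and that gap is detectable in the histogram of pixel differences. The function names, the threshold, and the toy Laplacian-shaped histogram below are illustrative assumptions, not the paper's actual detectors:

```python
import numpy as np

def diff_histogram(img):
    """Histogram of horizontal pixel differences (one bin per value -255..255)."""
    d = np.diff(img.astype(int), axis=1).ravel()
    return np.bincount(d + 255, minlength=511)

def has_shift_gap(hist, min_neighbour=50):
    """Detect the 'visible mark' of histogram shifting: an interior
    bin that is empty while both of its neighbours are well populated."""
    h = np.asarray(hist)
    interior = (h[1:-1] == 0) & (h[:-2] >= min_neighbour) & (h[2:] >= min_neighbour)
    return bool(interior.any())

# Toy demo: a Laplacian-like clean difference histogram vs. one where the
# bins right of the peak were shifted by one step to open an embedding gap.
bins = np.arange(-255, 256)
clean = (1_000_000 * np.exp(-np.abs(bins) / 3.0)).astype(int)

stego = clean.copy()
stego[257:] = clean[256:-1]  # shift differences >= +1 one step to the right
stego[256] = 0               # the emptied bin at difference +1
```

The gap test raises no flag on the smooth clean histogram but flags the shifted one immediately; the paper's general detector refines this idea with a volatility statistic computed over the whole histogram of differences.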
Abstract:
BACKGROUND: PCR has the potential to detect and precisely quantify specific DNA sequences, but it is not yet often used as a fully quantitative method. A number of data collection and processing strategies have been described for the implementation of quantitative PCR. However, they can be experimentally cumbersome, their relative performances have not been evaluated systematically, and they often remain poorly validated statistically and/or experimentally. In this study, we evaluated the performance of known methods and compared them with newly developed data processing strategies in terms of resolution, precision and robustness. RESULTS: Our results indicate that simple methods that do not rely on the estimation of the efficiency of the PCR amplification may provide reproducible and sensitive data, but that they do not quantify DNA with precision. Other evaluated methods, based on sigmoidal or exponential curve fitting, were generally of both poor resolution and poor precision. A statistical analysis of the parameters that influence efficiency indicated that it depends mostly on the selected amplicon and, to a lesser extent, on the particular biological sample analyzed. Thus, we devised various strategies based on individual or averaged efficiency values, which were used to assess the regulated expression of several genes in response to a growth factor. CONCLUSION: Overall, qPCR data analysis methods differ significantly in their performance, and this analysis identifies methods that provide DNA quantification estimates of high precision, robustness and reliability. These methods allow reliable estimation of relative expression ratios of two-fold or higher, and our analysis provides an estimate of the number of biological samples that have to be analyzed to achieve a given precision.
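Efficiency-based strategies of the kind evaluated here generally reduce to a Pfaffl-style efficiency-corrected ratio. The sketch below is a generic illustration of that calculation; the function name, efficiencies and Ct values are invented for the example, not taken from the study:

```python
def relative_expression_ratio(e_target, e_ref,
                              ct_target_ctrl, ct_target_treated,
                              ct_ref_ctrl, ct_ref_treated):
    """Efficiency-corrected relative expression (Pfaffl-style):

        ratio = E_t ** dCt_t / E_r ** dCt_r

    where dCt = Ct(control) - Ct(treated) and E is the per-amplicon
    amplification efficiency (2.0 would mean perfect doubling per cycle).
    """
    return (e_target ** (ct_target_ctrl - ct_target_treated) /
            e_ref ** (ct_ref_ctrl - ct_ref_treated))

# Hypothetical example: target amplifies at E = 1.95, reference at E = 1.90;
# the target Ct drops by 2 cycles on treatment, the reference is unchanged.
ratio = relative_expression_ratio(1.95, 1.90,
                                  ct_target_ctrl=24.0, ct_target_treated=22.0,
                                  ct_ref_ctrl=20.0, ct_ref_treated=20.0)
```

With these numbers the ratio is 1.95 squared, i.e. about 3.8-fold upregulation, above the two-fold level the study cites as reliably detectable; using an amplicon-specific E rather than assuming 2.0 is exactly the efficiency correction whose variants the paper compares.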
Abstract:
The objective of this work was to test methods for assessing pre-harvest sprouting in wheat cultivars. Fourteen wheat cultivars were grown in the Londrina and Ponta Grossa municipalities, Paraná state, Brazil. They were sampled at 10 and 17 days after physiological maturity and evaluated using three methods: germination under rainfall simulation (in a greenhouse), in-ear grain sprouting, and sprouting of grains removed from the ears. The in-ear grain sprouting method allowed the differentiation of cultivars, but indicated resistance levels that differed from the available cultivar descriptions. The sprouting of grains removed from the ears did not allow reliable differentiation of germination data on either sampling date or at either location. The rainfall simulation method is the most suitable for assessing cultivars for pre-harvest sprouting, regardless of the sampling date and location evaluated.
Abstract:
BACKGROUND: Health professionals and policymakers aspire to make healthcare decisions based on all the relevant research evidence. This, however, can rarely be achieved because a considerable amount of research findings are not published, especially in the case of 'negative' results - a phenomenon widely recognized as publication bias. Different methods of detecting, quantifying and adjusting for publication bias in meta-analyses have been described in the literature, such as graphical approaches and formal statistical tests to detect publication bias, and statistical approaches that modify effect sizes to adjust a pooled estimate when the presence of publication bias is suspected. An up-to-date systematic review of the existing methods is lacking. METHODS/DESIGN: The objectives of this systematic review are as follows:
• To systematically review methodological articles which focus on non-publication of studies and to describe methods of detecting and/or quantifying and/or adjusting for publication bias in meta-analyses.
• To appraise the strengths and weaknesses of these methods, the resources they require, and the conditions under which they could be used, based on the findings of the included studies.
We will systematically search Web of Science, Medline, and the Cochrane Library for methodological articles that describe at least one method of detecting and/or quantifying and/or adjusting for publication bias in meta-analyses. A dedicated data extraction form has been developed and pilot-tested. Working in teams of two, we will independently extract relevant information from each eligible article. As this will be a qualitative systematic review, data reporting will involve a descriptive summary. DISCUSSION: Results are expected to be publicly available in mid-2013.
This systematic review together with the results of other systematic reviews of the OPEN project (To Overcome Failure to Publish Negative Findings) will serve as a basis for the development of future policies and guidelines regarding the assessment and handling of publication bias in meta-analyses.
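Among the formal statistical tests the protocol will catalogue are regression-based funnel-plot asymmetry tests. As a concrete illustration, here is a generic Egger-style regression (a sketch of the technique in general, not a method drawn from the review itself; the worked numbers are fabricated so the fit is exact):

```python
import numpy as np

def egger_intercept(effects, ses):
    """Egger-style regression test for funnel-plot asymmetry: regress the
    standardized effect (effect / SE) on precision (1 / SE).  An intercept
    far from zero suggests small-study effects such as publication bias."""
    effects = np.asarray(effects, dtype=float)
    ses = np.asarray(ses, dtype=float)
    z = effects / ses                      # standardized effects
    prec = 1.0 / ses                       # precisions
    X = np.column_stack([np.ones_like(prec), prec])
    (intercept, slope), *_ = np.linalg.lstsq(X, z, rcond=None)
    return intercept, slope

# Worked example: five studies constructed to lie exactly on the regression
# line with intercept 1.5 (asymmetry) and slope 0.3 (underlying effect).
ses = np.array([0.1, 0.2, 0.3, 0.4, 0.5])
effects = (1.5 + 0.3 / ses) * ses
a, b = egger_intercept(effects, ses)
```

In real use the intercept would be tested against zero with its standard error; this stripped-down version only recovers the point estimates.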
Abstract:
New methods and devices for pursuing performance enhancement through altitude training were developed in Scandinavia and the USA in the early 1990s. At present, several forms of hypoxic training and/or altitude exposure exist: traditional 'live high-train high' (LHTH), contemporary 'live high-train low' (LHTL), intermittent hypoxic exposure at rest (IHE) and intermittent hypoxic exposure during training sessions (IHT). Although substantial differences exist between these methods of hypoxic training and/or exposure, all have the same goal: to induce an improvement in athletic performance at sea level. They are also used in preparation for competition at altitude and/or for the acclimatization of mountaineers. The underlying mechanisms behind the effects of hypoxic training are widely debated. Although the popular view is that altitude training may lead to an increase in haematological capacity, this may not be the main, or the only, factor involved in the improvement of performance. Other central (such as ventilatory, haemodynamic or neural adaptation) or peripheral (such as muscle buffering capacity or economy) factors also play an important role. LHTL was shown to be an efficient method. The optimal altitude for living high has been defined as 2200-2500 m to provide an optimal erythropoietic effect and up to 3100 m for non-haematological parameters. The optimal duration at altitude appears to be 4 weeks for inducing accelerated erythropoiesis, whereas <3 weeks (i.e. 18 days) is long enough for beneficial changes in economy, muscle buffering capacity, the hypoxic ventilatory response or Na(+)/K(+)-ATPase activity. One critical point is the daily dose of altitude. A natural altitude of 2500 m for 20-22 h/day (in fact, travelling down to the valley only for training) appears sufficient to increase erythropoiesis and improve sea-level performance.
'Longer is better' as regards haematological changes, since additional benefits have been shown as hypoxic exposure increases beyond 16 h/day. The minimum daily dose for stimulating erythropoiesis seems to be 12 h/day. For non-haematological changes, the implementation of a much shorter duration of exposure seems possible. Athletes could take advantage of IHT, which seems more beneficial than IHE for performance enhancement. The intensity of hypoxic exercise might play a role in adaptations at the molecular level in skeletal muscle tissue. There is clear evidence that intense exercise at high altitude stimulates muscle adaptations to a greater extent for both aerobic and anaerobic exercise and limits the decrease in power. So although IHT induces no increase in VO(2max) due to the low 'altitude dose', improvement in athletic performance is likely with high-intensity exercise (i.e. above the ventilatory threshold) due to an increase in mitochondrial efficiency and pH/lactate regulation. We propose a new combination of hypoxic methods (which we suggest naming Living High-Training Low and High, interspersed; LHTLHi) combining LHTL (five nights at 3000 m and two nights at sea level) with training at sea level except for a few (2-3 per week) IHT sessions of supra-threshold training. This review also provides a rationale on how to combine the different hypoxic methods and suggests advances in both their implementation and their periodization during the yearly training programme of athletes competing in endurance, glycolytic or intermittent sports.
Abstract:
The amount of water available is usually restricted, which leads to a situation where a complete understanding of the process, including water circulations and the influence of water components, is essential. The main aim of this thesis was to clarify the possibilities for the efficient use of residual peroxide by means of water circulation rearrangements. Rearranging water circulations and reducing water usage may cause new problems, such as metal-induced peroxide decomposition, that need to be addressed. This thesis introduces theoretical water circulation arrangements that combine two objectives: effective utilization of residual peroxide and avoidance of manganese in the alkaline peroxide bleaching stage. The results are mainly based on laboratory and mill-site experiments concerning the utilization of residual peroxide. A simulation model (BALAS) was used to evaluate the manganese contents and residual peroxide doses. It was shown that with optimal recirculation of residual peroxide the brightness can be improved or chemical costs can be decreased. From the scientific perspective, it was also very important to discover that recycled peroxide was a more effective pre-bleaching agent than fresh peroxide. This may be due to organic acids, i.e. peracetic acid, formed in the alkaline bleaching stage and present in the wash press filtrate. Even a short retention time was adequate, and activation of the residual peroxide with sodium hydroxide was not necessary. In practice, there are several possibilities for using residual peroxide in bleaching. A typical modern mechanical pulping process line consists of defibering, screening, a disc filter, a bleach press, high-consistency (HC) peroxide bleaching and a wash press. Furthermore, there is usually no separate medium-consistency (MC) pre-bleaching stage with additional thickening equipment.
The most advisable way to utilize residual peroxide in this kind of process is to recycle the wash press filtrate to the dilution of the disc filter pulp (a low-MC pre-bleaching stage). Such an arrangement would be beneficial in terms of reduced convection of manganese to the alkaline bleaching stage. Manganese originates from the wood material and is removed to the water phase already in the early stages of the process. Recycling residual peroxide prior to the disc filter is not recommended because of the low consistencies involved. Regarding water circulations, the novel point of view is that it would be beneficial to divide the water circulations into two sections, with the disc filter as the critical location for the division. Each section has its own priority: before the disc filter, manganese removal; after the disc filter, brightening of the pulp. This division can be carried out if the disc filter pulp is diluted only with wash press filtrate before the MC storage tower. The situation is even better if there is an additional press after the disc filter, which will improve the consistency of the pulp; this has a significant effect on the peroxide concentration in the MC pre-bleaching stage. In terms of manganese content, it is essential to avoid the use of disc filter filtrate in the bleach press and wash press showers. An additional cut-off press would also be beneficial for manganese removal. As a combination of higher initial brightness and lower manganese content, the typical brightness increase after the alkaline peroxide bleaching stage varies between approximately 0.5 and 1% ISO units. This improvement may not seem remarkable, but, as is generally known, the final brightness unit is the most expensive and most difficult to achieve. The estimation of cost savings is not unambiguous. For example, in the GW/TMP mill case, a 0.6% ISO unit higher final brightness gave 10% savings in the costs of bleaching chemicals.
With a hypothetical annual production of 200 000 tons, this means that the mill could save more than 400 000 euros per year in the costs of bleaching chemicals. In general, it can be said that there were no differences in behavior between the different types of processes (GW, PGW, TMP and BCTMP); the enhancement of recycling gave a similar response in all cases. However, it must be remembered that the utilization of residual peroxide in older mills depends a great deal on the process equipment, the amount of water available and the existing pipeline connections. In summary, processes are individual and the same solutions cannot be applied to all cases.
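The savings figure quoted for the mill case follows from simple arithmetic. In the sketch below the baseline chemical cost of 20 euros per ton is an assumption back-calculated purely for illustration, not a number from the thesis:

```python
# Back-of-the-envelope check of the mill-case savings estimate.
annual_production_t = 200_000        # hypothetical annual production, tons
baseline_cost_eur_per_t = 20.0       # ASSUMED baseline bleaching-chemical cost
savings_fraction = 0.10              # 10 % chemical savings from +0.6 % ISO

annual_savings_eur = (annual_production_t
                      * baseline_cost_eur_per_t
                      * savings_fraction)
```

Under that assumed baseline cost, the product comes to 400 000 euros per year, matching the order of magnitude stated above; with a different per-ton chemical cost the savings scale proportionally.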
Abstract:
A study of the partial (USEPA 3050B) and total (ISO 14869-1:2001) digestion methods for sediments was performed. USEPA 3050B was recommended as the simpler method with less operational risk. However, the extraction ability of the method should be taken into account for the best environmental interpretation of the results. FAAS was used to quantify metal concentrations in the sediment solutions. The alternative use of ICP-OES quantification should be conditional on a prior detailed investigation, and eventual correction, of the matrix effect. For the first time, the EID method was employed for the detection and correction of the matrix effect in sediment ICP-OES analysis. Finally, some considerations were made about the level of metal contamination in the area under study.
Abstract:
The purpose of this thesis is to examine the level of customer consciousness among the production process employees of a steel factory and to investigate the methods of internal marketing in order to propose development practices that enhance the customer consciousness of the case company's employees. The significance of the level of customer consciousness is discussed, and practices already implemented in the company that affect it are examined. The literature review gives an insight into the role of customer consciousness in the CRM philosophy of a manufacturing company and examines the means of internal marketing for enhancing customer consciousness. In the empirical part of the study, the level and significance of customer consciousness are determined by conducting individual and focus group interviews. The interviews are also used to examine the practices that could serve to enhance the customer consciousness of the employees. Development suggestions to improve the level of customer consciousness in the production process are given based on the results. Customer consciousness is at a poor level in the production process and influences above all work motivation and job satisfaction, but possibly customer satisfaction as well. Customer consciousness in the production process should be enhanced, e.g., by ensuring that the right knowledge is distributed coherently to all of the employees, gathering a large customer reference database to exploit in work and in training, using visual illustration in presenting customer information, training proactively, and letting the employees participate in customer-oriented development activities. A customer-satisfaction-focused reward system can also be considered.
Abstract:
New luminometric particle-based methods were developed to quantify protein and to count cells. The developed methods rely on the interaction of the sample with nano- or microparticles and on different principles of detection. In the fluorescence quenching, time-resolved luminescence resonance energy transfer (TR-LRET), and two-photon excitation fluorescence (TPX) methods, the sample prevents the adsorption of labeled protein to the particles. Depending on the system, the addition of the analyte increases or decreases the luminescence. In the dissociation method, the adsorbed protein protects the Eu(III) chelate on the surface of the particles from dissociation at low pH. The experimental setups are user-friendly and rapid and require neither hazardous test compounds nor elevated temperatures. The sensitivity of the protein quantification (from 40 to 500 pg of bovine serum albumin in a sample) was 20-500-fold better than that of the most sensitive commercial methods. The quenching method exhibited low protein-to-protein variability, and the dissociation method was insensitive to the assay contaminants commonly found in biological samples. Fewer than ten eukaryotic cells could be detected and quantified with all the developed methods under optimized assay conditions. Furthermore, two applications, a method for detecting protein aggregation and a cell viability test, were developed utilizing the TR-LRET method. Protein aggregation could be detected at a concentration (30 μg/L) more than 10,000 times lower than with the known methods of UV240 absorbance and dynamic light scattering. The TR-LRET method was combined with a nucleic acid assay using a cell-impermeable dye to measure the percentage of dead cells in a single-tube test at cell counts below 1000 cells/tube.