906 results for Faults detection and location
Abstract:
Two simple, rapid and cost-effective methods based on titrimetric and spectrophotometric techniques are described for the assay of RNH in bulk drug and in dosage forms using silver nitrate, mercury(II) thiocyanate and iron(III) nitrate as reagents. In titrimetry, an aqueous solution of RNH is treated with a measured excess of silver nitrate in HNO3 medium, followed by determination of the unreacted silver nitrate by the Volhard method using iron(III) alum indicator. The spectrophotometric method involves the addition of a known excess of mercury(II) thiocyanate and iron(III) nitrate to RNH, followed by measurement of the absorbance of the iron(III) thiocyanate complex at 470 nm. The titrimetric method is applicable over the 4-30 mg range, and the reaction stoichiometry is found to be 1:1 (RNH:AgNO3). In the spectrophotometric method, the absorbance increases linearly with the concentration of RNH, as corroborated by a correlation coefficient of 0.9959. The system obeys Beer's law over 5-70 µg mL-1. The apparent molar absorptivity and Sandell sensitivity are calculated to be 3.27 × 10³ L mol-1 cm-1 and 0.107 µg cm-2, respectively. The limits of detection and quantification are also reported for the spectrophotometric method. Intra-day and inter-day precision and accuracy of the methods were evaluated as per ICH guidelines. The methods were successfully applied to the assay of RNH in formulations, and the results were compared with those of a reference method by applying Student's t- and F-tests. No interference was observed from common pharmaceutical excipients. The accuracy of the methods was further ascertained by performing recovery tests using the standard addition method.
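As an illustration of how the reported figures of merit fit together, the sketch below applies the Beer-Lambert relation A = εbc with the molar absorptivity and Sandell sensitivity quoted above; the 1 cm path length, the absorbance value and the implied-molar-mass cross-check are assumptions for illustration only, not data from the study.

```python
# Minimal sketch (assumptions: 1 cm path length, an illustrative absorbance, and the
# standard relation Sandell sensitivity = M / epsilon for a 1 cm cell).
EPSILON = 3.27e3        # apparent molar absorptivity, L mol-1 cm-1 (reported)
SANDELL = 0.107         # Sandell sensitivity, ug cm-2 (reported)
PATH_CM = 1.0           # assumed cuvette path length, cm

implied_molar_mass = EPSILON * SANDELL            # g mol-1, consistency check (~350)

A = 0.45                                          # illustrative absorbance at 470 nm
c_molar = A / (EPSILON * PATH_CM)                 # Beer's law: c = A / (epsilon * b)
c_ug_per_ml = c_molar * implied_molar_mass * 1e3  # mol L-1 -> ug mL-1

print(f"implied M ~ {implied_molar_mass:.0f} g/mol, c ~ {c_ug_per_ml:.1f} ug/mL")
```

With these assumed values the result falls inside the reported 5-70 µg mL-1 Beer's law range, which is the kind of internal consistency the quoted parameters allow one to check.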
Abstract:
Coronary artery disease (CAD) is a chronic process that evolves over decades and may culminate in myocardial infarction (MI). While invasive coronary angiography (ICA) is still considered the gold standard for imaging CAD, non-invasive assessment of both the vascular anatomy and myocardial perfusion has become an intriguing alternative. In particular, computed tomography (CT) and positron emission tomography (PET) form an attractive combination for such studies. Increased radiation dose is, however, a concern. Our aim in the current thesis was to test novel CT and PET techniques, alone and in a hybrid setting, in the detection and assessment of CAD in clinical patients. Along with diagnostic accuracy, methods for the reduction of the radiation dose were an important target. The study investigating the coronary arteries of patients with atrial fibrillation (AF) showed that CAD may be an important etiology of AF, because a high prevalence of CAD was demonstrated among AF patients. In patients with suspected CAD, we demonstrated that a sequential, prospectively ECG-triggered CT technique was applicable to nearly 9 out of 10 clinical patients and that the radiation dose was over 60% lower than with spiral CT. To detect the functional significance of obstructive CAD, novel software for perfusion quantification, Carimas™, showed high reproducibility with 15O-labelled water in PET, supporting its feasibility and good clinical accuracy. In a larger cohort of 107 patients with a moderate 30-70% pre-test probability of CAD, hybrid PET/CT was shown to be a powerful diagnostic method in the assessment of CAD, with diagnostic accuracy comparable to that of invasive angiography and fractional flow reserve (FFR) measurements. A hybrid study may be performed with a reasonable radiation dose in the vast majority of cases, improving the performance of stand-alone PET and CT angiography, particularly when absolute quantification of perfusion is employed. These results can be applied in clinical practice and will be useful for the daily clinical diagnosis of CAD.
Abstract:
Background: Eating disorders are serious psychiatric disorders, which usually have their onset in adolescence. Body dissatisfaction and dieting, both common among adolescents, are recognised risk factors for eating disorders. The aim of the present study was to assess the prevalence of eating disorders in the general adolescent population, assess the risk of developing eating disorders in subgroups of dieters, and analyse longitudinal concomitants of incorrect weight perception. Method: A prospective follow-up study of 595 adolescents, aged 15 at baseline, was conducted in western Finland. The study comprised questionnaires directed at the whole study population and subsequent personal interviews with adolescents found to be screen-positive for eating disorders, at both baseline and three-year follow-up. Results: The lifetime prevalence rates for 18-year-old females were 2.6% for anorexia nervosa, 0.4% for bulimia nervosa, and 9.0% for eating disorder not otherwise specified (EDNOS). No prevalent case of DSM-IV eating disorders was found among the male participants. Eating disorders, as well as depressive symptoms, social anxiety, and low self-esteem, were more prevalent among females who perceived themselves as being overweight, despite being of normal weight or underweight, than among females with a correct weight perception. In males, an incorrect weight perception was associated with social anxiety. Female adolescents dieting due to psychological distress, rather than vanity or overweight, had a fifteen-fold risk of developing an eating disorder. Conclusions: Eating disorders are common among female adolescents, and adolescents choosing to diet due to psychological distress show a markedly increased risk of developing an eating disorder. Promotion of general well-being, together with the prevention of body dissatisfaction and misdirected dieting, accompanied by early detection and proper treatment of eating disorders, is needed to reduce the incidence of eating disorders and to facilitate recovery in adolescents suffering from them.
Abstract:
Given the intense debate in Brazil between landowners and public agencies about how much forest cover is needed in different regions, there is a growing need for technical data to support decision making. One of the criteria used to evaluate the effect of forest cover in protecting water resources is soil loss, which has several environmental consequences, including the silting of rivers. Therefore, this study aimed to evaluate the reduction in soil loss in micro-watersheds with different reliefs and with forest cover of different sizes and locations, in the Corumbataí River watershed, in the state of São Paulo, using the Revised Universal Soil Loss Equation (RUSLE) in a GIS environment. For this study, 18 watersheds in three slope classes were selected, and 20 land-use scenarios were established to analyze the influence of PPA size and of the size and location of the Legal Reserve. The results showed that: a) the effect of forest cover in reducing annual soil loss varies depending on the average slope of the watershed; b) the PPA width must be determined taking into account the slope of the watershed; c) the Legal Reserve must be located along the PPA. Together, these measures provide better results in reducing annual soil loss.
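The RUSLE estimate referenced above is the cell-wise product of five factors, A = R · K · LS · C · P. A minimal sketch with assumed, purely illustrative factor values is shown below; in the study itself these factors are derived as GIS raster layers for each scenario.

```python
# Minimal RUSLE sketch (assumption: the small arrays below stand in for raster layers;
# the numbers are illustrative, not values from the Corumbataí study).
import numpy as np

R  = np.array([[6500.0, 6500.0], [6800.0, 6800.0]])   # rainfall erosivity, MJ mm ha-1 h-1 yr-1
K  = np.array([[0.02,   0.03  ], [0.02,   0.04  ]])   # soil erodibility, t h MJ-1 mm-1
LS = np.array([[1.2,    2.5   ], [0.8,    4.0   ]])   # slope length/steepness factor
C  = np.array([[0.001,  0.2   ], [0.001,  0.05  ]])   # cover-management (forest vs. crop)
P  = np.ones_like(R)                                  # support-practice factor

A = R * K * LS * C * P                                # annual soil loss per cell, t ha-1 yr-1
print(A)
```

Changing the C factor of the cells assigned to forest cover (PPA or Legal Reserve) is what distinguishes the land-use scenarios in this kind of analysis.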
Abstract:
Prostate-specific antigen (PSA) is a marker that is commonly used in estimating prostate cancer risk. Prostate cancer is usually a slowly progressing disease, which may never cause any symptoms. Nevertheless, some cancers are aggressive and need to be treated before they become life-threatening. However, the blood PSA concentration may also rise in benign prostate diseases, and using a single total PSA (tPSA) measurement to guide the decision on further examinations leads to many unnecessary biopsies, over-detection, and overtreatment of indolent cancers that would not require treatment. Therefore, there is a need for markers that would better separate cancer from benign disorders and would also predict cancer aggressiveness. The aim of this study was to evaluate whether intact and nicked forms of free PSA (fPSA-I and fPSA-N) or human kallikrein-related peptidase 2 (hK2) could serve as new tools in estimating prostate cancer risk. First, the immunoassays for fPSA-I and for free and total hK2 were optimized so that they would be less prone to assay interference caused by interfering factors present in some blood samples. The optimized assays were shown to work well and were used to study the marker concentrations in the clinical sample panels. The marker levels were measured from preoperative blood samples of prostate cancer patients scheduled for radical prostatectomy, and the association of the markers with cancer stage and grade was studied. It was found that, among all tested markers and their combinations, especially the ratio of fPSA-N to tPSA and the ratio of free PSA (fPSA) to tPSA were associated with both cancer stage and grade. They might be useful in predicting cancer aggressiveness, but further follow-up studies are necessary to fully evaluate the significance of the markers in this clinical setting. The markers tPSA, fPSA, fPSA-I and hK2 were combined in a statistical model which was previously shown to be able to reduce unnecessary biopsies when applied to large screening cohorts of men with elevated tPSA. The discriminative accuracy of this model was compared with that of models based on established clinical predictors, with biopsy outcome as the reference. The kallikrein model and the calculated fPSA-N concentrations (fPSA minus fPSA-I) correlated with prostate volume, and the model, when compared with the clinical models, predicted prostate cancer at biopsy equally well. Hence, the measurement of kallikreins in a blood sample could be used to replace the volume measurement, which is time-consuming, requires instrumentation and skilled personnel, and is an uncomfortable procedure. Overall, the model could simplify the estimation of prostate cancer risk. Finally, as fPSA-N seems to be an interesting new marker, a direct immunoassay for measuring fPSA-N concentrations was developed. Its analytical performance was acceptable, but the rather complicated assay protocol needs to be improved before it can be used for measuring large sample panels.
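For concreteness, the only arithmetic the abstract specifies is fPSA-N = fPSA − fPSA-I and the marker-to-tPSA ratios; a minimal sketch with hypothetical concentrations could look like this:

```python
# Minimal sketch (assumption: hypothetical marker concentrations in ng/mL; the only
# relation taken from the abstract is fPSA-N = fPSA - fPSA-I).
tpsa, fpsa, fpsa_i = 6.2, 1.4, 0.9     # hypothetical tPSA, fPSA, fPSA-I values

fpsa_n = fpsa - fpsa_i                 # calculated nicked free PSA
ratio_fpsa_n = fpsa_n / tpsa           # fPSA-N / tPSA
ratio_fpsa   = fpsa / tpsa             # fPSA / tPSA ("percent free PSA")
print(f"fPSA-N={fpsa_n:.2f} ng/mL, fPSA-N/tPSA={ratio_fpsa_n:.2f}, fPSA/tPSA={ratio_fpsa:.2f}")
```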
Abstract:
Companies are increasingly under pressure to be more efficient, both in terms of costs and overall performance, and thus seek new ways to develop their products and innovate. In the pharmaceutical industry it can take several decades to launch a new drug onto the market. Since the pharmaceutical industry is one of the most research-intensive industries, outsourcing is one way to enhance the R&D processes of such companies. Outsourcing to offshore locations is said to be vastly more challenging and complicated than any other exporting activity or inter-company relationship, and it has evoked a great deal of discussion. When outsourcing strategically, companies must also focus thoroughly on transaction costs and core competences. Today, suppliers are sought beyond national boundaries and, furthermore, the location of the outsourcing activity must be carefully considered. Consequently, the purpose of this study is to analyze what is known about the strategic outsourcing of pharmaceutical R&D to India. To meet this purpose, the study answers three sub-questions: first, what is strategic outsourcing; second, why do pharmaceutical companies utilize strategic outsourcing of R&D; and last, why do pharmaceutical companies select India as the location for outsourcing their R&D. The study is qualitative. Its purpose was approached through a literature review with systematic elements, and the sub-questions were analyzed through relevant theories, such as transaction cost theory, core competences and location advantages. Applicable academic journal articles were comprehensively included in the study. The data were collected from electronic journal article databases using keywords, and almost exclusively peer-reviewed articles, as recent as possible, were included. Both the reference lists of the included articles and article recommendations from professionals yielded further articles for inclusion. The data were analyzed through thematization, which resulted in themes that illuminate the purpose of the study and its sub-questions. As an outcome of the analysis, each theory chapter in the study addresses one sub-question. The literature used in this study revealed that strategic outsourcing of R&D is increasingly used in the pharmaceutical industry, and the major motives for practicing it have to do with lowering costs; accessing skilled labor, resources and knowledge and enhancing their quality; and speeding up the introduction of new drugs. Mainly for the above-mentioned motives, India is frequently chosen as the target location by pharmaceutical outsourcers. Still, the literature on this complex phenomenon is somewhat incomplete and more research is needed.
Abstract:
This paper investigates defect detection methodologies for rolling element bearings through vibration analysis. Specifically, the utility of a new signal processing scheme combining the High Frequency Resonance Technique (HFRT) and the Adaptive Line Enhancer (ALE) is investigated. An accelerometer is used to acquire data for this analysis, and experimental results have been obtained for outer race defects. Results show the potential effectiveness of the signal processing technique to determine both the severity and location of a defect. The HFRT exploits the fact that much of the energy resulting from a defect impact manifests itself in the higher resonant frequencies of a system. Demodulation of these frequency bands through the envelope technique is then employed to gain further insight into the nature of the defect while further increasing the signal-to-noise ratio. If the defect is periodic, its characteristic frequency is then present in the spectrum of the enveloped signal. The ALE is used to enhance the envelope spectrum by reducing the broadband noise; it provides an enhanced envelope spectrum with clear peaks at the harmonics of a characteristic defect frequency. It is implemented by using a delayed version of the signal and the signal itself to decorrelate the wideband noise. This noise is then rejected by the adaptive filter, which is based upon the periodic information in the signal. Results obtained for outer race defects show the effectiveness of the methodology in determining both the severity and location of a defect; in two instances, a linear relationship between signal characteristics and defect size is indicated.
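A minimal sketch of the two processing stages described above is given below, assuming a generic band-pass/Hilbert envelope step for the HFRT and an LMS-based predictor for the ALE; the sampling rate, resonance band, delay, filter length and step size are illustrative choices, not the parameters used in the paper.

```python
# Minimal HFRT + ALE sketch (illustrative parameters only).
import numpy as np
from scipy.signal import hilbert, butter, filtfilt

def envelope_spectrum(x, fs, band=(2000.0, 8000.0)):
    """Band-pass around an assumed structural resonance, then take the envelope spectrum."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    xb = filtfilt(b, a, x)
    env = np.abs(hilbert(xb))                      # envelope via Hilbert transform
    env -= env.mean()
    spec = np.abs(np.fft.rfft(env)) / len(env)
    freqs = np.fft.rfftfreq(len(env), 1.0 / fs)
    return freqs, spec

def adaptive_line_enhancer(x, delay=32, taps=256, mu=1e-3):
    """LMS-based ALE: predict the periodic part of x from a delayed copy; the prediction
    retains narrow-band (defect-related) components and rejects wide-band noise."""
    w = np.zeros(taps)
    y = np.zeros_like(x)
    for n in range(delay + taps, len(x)):
        u = x[n - delay - taps:n - delay][::-1]    # delayed reference vector
        y[n] = w @ u
        e = x[n] - y[n]
        w += 2 * mu * e * u                        # LMS weight update
    return y                                       # enhanced (denoised) signal

# Example on a synthetic signal: periodic impacts at ~105 Hz buried in noise.
fs = 20000.0
rng = np.random.default_rng(0)
x = rng.normal(scale=0.5, size=int(fs))
x[::int(fs / 105)] += 5.0
freqs, spec = envelope_spectrum(adaptive_line_enhancer(x), fs)
```

After enhancement, the envelope spectrum of the ALE output would be inspected for peaks at the characteristic defect frequency and its harmonics.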
Abstract:
An experimental technique for the detection of subcooled boiling through analysis of the fluctuations contained in pressure transducer signals is presented. This work was partly conducted at the Institut für Kerntechnik und Zerstörungsfreie Prüfverfahren von Hannover (IKPH, Germany) in a thermal-hydraulic circuit whose test section consists of a single electrically heated rod in an annular geometry. Piezoresistive pressure sensors are used for the detection of the onset of nucleate boiling (ONB) and the onset of fully developed boiling (OFDB) using spectral analysis and signal correlation techniques. The experimental results are interpreted by phenomenological analysis of these two points and compared with existing correlations. The results allow us to conclude that this technique is adequate for the detection and monitoring of ONB and OFDB.
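As a rough illustration of the signal-processing side, the sketch below estimates the power spectral density of a pressure transducer signal with Welch's method and the cross-spectrum between two transducers; the sampling rate, segment length and the placeholder noise signals are assumptions, not the experimental settings used at IKPH.

```python
# Minimal spectral-analysis sketch (placeholder signals; illustrative parameters only).
import numpy as np
from scipy.signal import welch, csd

fs = 2000.0                                   # assumed sampling rate [Hz]
rng = np.random.default_rng(0)
p1 = rng.normal(size=60 * int(fs))            # placeholder for measured pressure signals
p2 = rng.normal(size=60 * int(fs))

# Power spectral density of one transducer: growth/broadening of the fluctuation
# spectrum is the kind of indicator tracked at ONB and OFDB.
f, Pxx = welch(p1, fs=fs, nperseg=4096)

# Cross-spectral density between two transducers; its phase supports the signal
# correlation analysis between measurement positions.
f, Pxy = csd(p1, p2, fs=fs, nperseg=4096)
phase = np.angle(Pxy)
```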
Abstract:
Although the concept of a multi-product biorefinery provides an opportunity to meet future demands for biofuels, biomaterials and chemicals, it is not assured that its implementation would improve the profitability of kraft pulp mills. The attractiveness will depend on several factors, such as mill age and location, government incentives, economy of scale, end-user requirements, and how much value can be added to the new products. In addition, the effective integration of alternative technologies is not straightforward and has to be carefully studied. In this work, detailed balances were performed to evaluate the possible impacts that lignin removal, hemicellulose recovery prior to pulping, and torrefaction and pyrolysis of wood residues have on conventional mill operation. The development of the mill balances was based on theoretical fundamentals, practical experience, a literature review, personal communication with technology suppliers, and analysis of mill process data. Hemicellulose recovery through pre-hydrolysis of chips affects several stages of the kraft process: effects can be observed on the pulping process, wood consumption, black liquor properties and, inevitably, on pulp quality. Lignin removal from black liquor mostly affects the chemical recovery operation and the steam generation rate; since mineral acid is used to precipitate the lignin, impacts on the mill chemical balance are also expected. A great advantage of processing the wood residues for additional income is that the pulping process, pulp quality and sales are not adversely affected. For pulp mills interested in implementing the concept of a multi-product biorefinery, this work has indicated possible impacts to be considered in a technical feasibility study.
Abstract:
Bioprocess technology is a multidisciplinary industry that combines knowledge of biology and chemistry with process engineering. It is a growing industry because its applications play an important role in the food, pharmaceutical, diagnostics and chemical industries. In addition, the current pressure to decrease our dependence on fossil fuels motivates new, innovative research on the replacement of petrochemical products. Bioprocesses are processes that utilize cells and/or their components in the production of desired products. Bioprocesses are already used to produce fuels and chemicals, especially ethanol and building-block chemicals such as carboxylic acids. To enable more efficient, sustainable and economically feasible bioprocesses, the raw materials must be cheap and the bioprocesses must be operated under optimal conditions. It is essential to measure the parameters that provide information about the process conditions and the main critical process parameters, including cell density and the concentrations of substrates and products. In addition to offline analysis methods, online monitoring tools are becoming increasingly important in the optimization of bioprocesses. Capillary electrophoresis (CE) is a versatile analysis technique with no limitations concerning polar solvents, analytes or samples. Its resolution and efficiency are high in optimized methods, creating great potential for rapid detection and quantification. This work demonstrates the potential and possibilities of CE as a versatile bioprocess monitoring tool. As part of this study, a commercial CE device was modified for use as an online analysis tool for automated monitoring. The work describes three offline CE analysis methods for the determination of carboxylic, phenolic and amino acids present in bioprocesses, and an online CE analysis method for monitoring carboxylic acid production during bioprocesses. The detection methods were indirect and direct UV and laser-induced fluorescence. The results of this work can be used for the optimization of bioprocess conditions, for the development of more robust and tolerant microorganisms, and to study the dynamics of bioprocesses.
Abstract:
The objective of this study was to optimize and validate the solid-liquid extraction (ESL) technique for the determination of picloram residues in soil samples. At the optimization stage, the optimal conditions for extraction of the soil samples were determined using univariate analysis. The soil/extraction solution ratio, the type and time of agitation, and the ionic strength and pH of the extraction solution were evaluated. Based on the optimized parameters, the following method of extraction and analysis of picloram was developed: weigh 2.00 g of soil, dried and sieved through a 2.0 mm mesh sieve; add 20.0 mL of 0.5 mol L-1 KCl; vortex the bottle for 10 seconds to form a suspension and adjust to pH 7.00 with 0.1 mol L-1 KOH. Homogenize the system in a shaker for 60 minutes and then let it stand for 10 minutes. The bottles are centrifuged for 10 minutes at 3,500 rpm. After the soil particles settle and the supernatant extract is clarified, an aliquot is withdrawn and analyzed by high performance liquid chromatography. The optimized method was validated by determining the selectivity, linearity, detection and quantification limits, precision and accuracy. The ESL methodology was efficient for the analysis of residues of the pesticide studied, with recoveries above 90%. The limits of detection and quantification were 20.0 and 66.0 mg kg-1 soil for the PVA soil, and 40.0 and 132.0 mg kg-1 soil for the VLA soil. The coefficients of variation (CV) were 2.32 and 2.69 for the PVA and TH soils, respectively. The methodology resulted in low organic solvent consumption and cleaner extracts, and no purification steps were required prior to chromatographic analysis. The parameters evaluated in the validation process indicated that the ESL methodology is efficient for the extraction of picloram residues in soils, with low limits of detection and quantification.
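For reference, a common way to derive detection and quantification limits during validation is from the calibration curve, as 3.3·s/S and 10·s/S (s = residual standard deviation, S = slope); the sketch below uses hypothetical calibration and recovery data and is not necessarily the exact procedure followed in this study.

```python
# Minimal validation-figures sketch (hypothetical calibration and recovery data).
import numpy as np

conc = np.array([0.05, 0.10, 0.25, 0.50, 1.00, 2.00])   # spiked concentrations
area = np.array([1.1, 2.1, 5.3, 10.4, 20.9, 41.6])      # detector response

slope, intercept = np.polyfit(conc, area, 1)
residuals = area - (slope * conc + intercept)
s = residuals.std(ddof=2)            # residual standard deviation of the regression

lod = 3.3 * s / slope                # limit of detection
loq = 10.0 * s / slope               # limit of quantification

spiked, measured = 0.50, 0.47        # hypothetical spike and measured value
recovery = 100.0 * measured / spiked # recovery percentage
print(f"LOD = {lod:.3f}, LOQ = {loq:.3f}, recovery = {recovery:.1f}%")
```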
Abstract:
This thesis researches automatic traffic sign inventory and condition analysis using machine vision and pattern recognition methods. Automatic traffic sign inventory and condition analysis can be used to make road maintenance more efficient, to improve maintenance processes, and to enable intelligent driving systems. Automatic traffic sign detection and classification has been researched before from the viewpoint of self-driving vehicles, driver assistance systems, and the use of signs in mapping services. Machine vision based inventory of traffic signs consists of detection, classification, localization, and condition analysis of traffic signs. The performance of the produced machine vision system is estimated with three datasets, two of which were collected for this thesis. Based on the experiments, almost all traffic signs can be detected, classified, and localized, and their condition analysed. In the future, the inventory system's performance has to be verified in challenging conditions and the system has to be pilot tested.
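For context, a typical first stage of such an inventory pipeline generates sign candidates before classification and condition analysis; the sketch below shows an assumed, generic colour-thresholding detector (OpenCV 4, hypothetical file name and HSV thresholds), not the method implemented in the thesis.

```python
# Minimal sign-candidate detection sketch (hypothetical input image and thresholds).
import cv2
import numpy as np

img = cv2.imread("road_scene.jpg")                       # hypothetical input frame
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# Red hue wraps around 0/180 in OpenCV's HSV space, so two ranges are combined.
mask = cv2.inRange(hsv, (0, 100, 80), (10, 255, 255)) | \
       cv2.inRange(hsv, (170, 100, 80), (180, 255, 255))

contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
candidates = [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > 400]
# Each bounding box would then be passed to a classifier and a condition-analysis step.
```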
Abstract:
The present paper reviews the application of patch-clamp principles to the detection and measurement of macromolecular translocation along the nuclear pores. We demonstrate that the tight seal ('gigaseal') between the pipette tip and the nuclear membrane is possible in the presence of fully operational nuclear pores. We show that the ability to form a gigaseal in nucleus-attached configurations does not mean that only the activity of channels from the outer membrane of the nuclear envelope can be detected. Instead, we show that, in the presence of fully operational nuclear pores, it is likely that the large-conductance ion channel activity recorded derives from the nuclear pores. We conclude the technical section with the suggestion that the best way to demonstrate that the nuclear pores are responsible for ion channel activity is to show, with fluorescence microscopy, the nuclear translocation of ions and small molecules and their exclusion from the cisterna enclosed by the two membranes of the envelope. Since transcription factors and mRNAs, two major groups of nuclear macromolecules, use nuclear pores to enter and exit the nucleus and play essential roles in the control of gene activity and expression, this review should be useful to cell and molecular biologists interested in understanding how patch-clamp techniques can be used to quantitate the translocation of such macromolecules into and out of the nucleus.
Abstract:
A liquid-phase blocking ELISA (LPB-ELISA) was developed for the detection and measurement of antibodies against infectious bronchitis virus (IBV). The purified and non-purified virus used as antigen, the capture and detector antibodies, and the chicken hyperimmune sera were prepared and standardized for this purpose. A total of 156 sera from vaccinated chickens and 100 sera from specific-pathogen-free chickens with no recorded contact with the virus were tested. The serum titers obtained in the serum neutralization test (SNT) were compared with those obtained in the LPB-ELISA, and there was a high correlation (r² = 0.8926) between the two tests. The LPB-ELISA represents a single test suitable for the rapid detection of antibodies against infectious bronchitis virus in chicken sera, with good sensitivity (88%), specificity (100%) and agreement (95.31%).
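The reported sensitivity, specificity and agreement follow the standard 2×2 definitions; a minimal sketch with hypothetical counts (not the study's data) is shown below.

```python
# Minimal diagnostic-performance sketch (hypothetical 2x2 counts; formulas are the
# standard definitions, not results from the study).
tp, fn = 88, 12       # reference-positive sera: test positive / negative (hypothetical)
tn, fp = 100, 0       # reference-negative sera: test negative / positive (hypothetical)

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
agreement = (tp + tn) / (tp + fn + tn + fp)
print(f"sensitivity={sensitivity:.1%}, specificity={specificity:.1%}, agreement={agreement:.2%}")
```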
Abstract:
Preparative liquid chromatography is one of the most selective separation techniques in the fine chemical, pharmaceutical, and food industries. Several process concepts have been developed and applied for improving the performance of classical batch chromatography. The most powerful approaches include various single-column recycling schemes, counter-current and cross-current multi-column setups, and hybrid processes where chromatography is coupled with other unit operations such as crystallization, a chemical reactor, and/or a solvent removal unit. To fully utilize the potential of stand-alone and integrated chromatographic processes, efficient methods for selecting the best process alternative as well as optimal operating conditions are needed. In this thesis, a unified method is developed for analysis and design of the following single-column fixed bed processes and corresponding cross-current schemes: (1) batch chromatography, (2) batch chromatography with an integrated solvent removal unit, (3) mixed-recycle steady state recycling chromatography (SSR), and (4) mixed-recycle steady state recycling chromatography with solvent removal from fresh feed, recycle fraction, or column feed (SSR–SR). The method is based on the equilibrium theory of chromatography with an assumption of negligible mass transfer resistance and axial dispersion. The design criteria are given in general, dimensionless form that is formally analogous to that applied widely in the so-called triangle theory of counter-current multi-column chromatography. Analytical design equations are derived for binary systems that follow the competitive Langmuir adsorption isotherm model. For this purpose, the existing analytic solution of the ideal model of chromatography for binary Langmuir mixtures is completed by deriving missing explicit equations for the height and location of the pure first component shock in the case of a small feed pulse. It is thus shown that the entire chromatographic cycle at the column outlet can be expressed in closed form. The developed design method allows predicting the feasible range of operating parameters that lead to desired product purities. It can be applied for the calculation of first estimates of optimal operating conditions, the analysis of process robustness, and the early-stage evaluation of different process alternatives. The design method is utilized to analyse the possibility of enhancing the performance of conventional SSR chromatography by integrating it with a solvent removal unit. It is shown that the amount of fresh feed processed during a chromatographic cycle, and thus the productivity of the SSR process, can be improved by removing solvent. The maximum solvent removal capacity depends on the location of the solvent removal unit and the physical solvent removal constraints, such as solubility, viscosity, and/or osmotic pressure limits. Usually, the most flexible option is to remove solvent from the column feed. The applicability of the equilibrium design for real, non-ideal separation problems is evaluated by means of numerical simulations. Due to the assumption of infinite column efficiency, the developed design method is most applicable for high performance systems where thermodynamic effects are predominant, while significant deviations are observed under highly non-ideal conditions. The findings based on the equilibrium theory are applied to develop a shortcut approach for the design of chromatographic separation processes under strongly non-ideal conditions with significant dispersive effects.
The method is based on a simple procedure applied to a single conventional chromatogram. The applicability of the approach to the design of batch and counter-current simulated moving bed processes is evaluated with case studies. It is shown that the shortcut approach performs better the higher the column efficiency and the lower the purity constraints are.
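For reference, the competitive Langmuir isotherm assumed in the design equations has the standard binary form below; the symbols follow the usual convention and no parameter values are implied by the abstract.

```latex
% Competitive Langmuir adsorption isotherm for a binary mixture (standard form).
% a_i = Henry constants, b_i = equilibrium parameters, c_i = fluid-phase concentrations.
\[
  q_i \;=\; \frac{a_i\, c_i}{1 + b_1 c_1 + b_2 c_2}, \qquad i = 1,\,2 .
\]
```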