8 results for Process parameters
in BORIS: Bern Open Repository and Information System - Bern - Switzerland
Abstract:
With the publication of the quality guideline ICH Q9 "Quality Risk Management" by the International Conference on Harmonization, risk management has become a standard requirement during the life cycle of a pharmaceutical product. Failure mode and effect analysis (FMEA) is a powerful risk analysis tool that has been used for decades in the mechanical and electrical industries. However, adapting the FMEA methodology to biopharmaceutical processes brings about some difficulties. The proposal presented here is intended to serve as a brief but nevertheless comprehensive and detailed guideline on how to conduct a biopharmaceutical process FMEA. It includes a detailed 1-to-10-scale FMEA rating table for occurrence, severity, and detectability of failures that has been especially designed for typical biopharmaceutical processes. The applications of such a biopharmaceutical process FMEA are widespread. It can be useful whenever a biopharmaceutical manufacturing process is developed or scaled up, or when it is transferred to a different manufacturing site. It may also be conducted during substantial optimization of an existing process or the development of a second-generation process. According to their resulting risk ratings, process parameters can be ranked by importance, and important variables for process development, characterization, or validation can be identified.
LAY ABSTRACT: Health authorities around the world ask pharmaceutical companies to manage risk during the development and manufacturing of pharmaceuticals. The so-called failure mode and effect analysis (FMEA) is an established risk analysis tool that has been used for decades in the mechanical and electrical industries. However, adapting the FMEA methodology to pharmaceutical processes that use modern biotechnology (biopharmaceutical processes) brings about some difficulties, because those processes differ from processes in the mechanical and electrical industries. The proposal presented here explains how a biopharmaceutical process FMEA can be conducted. It includes a detailed 1-to-10-scale FMEA rating table for occurrence, severity, and detectability of failures that has been especially designed for typical biopharmaceutical processes. With the help of this guideline, different details of the manufacturing process can be ranked according to their potential risks, and this can help pharmaceutical companies to identify aspects with high potential risks and to react accordingly to improve the safety of medicines.
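The core of such an FMEA ranking is the risk priority number, i.e. the product of the occurrence, severity and detectability ratings. A minimal sketch of this ranking step, with invented parameter names and ratings rather than the paper's actual rating table:

```python
from dataclasses import dataclass

@dataclass
class FailureMode:
    """One failure mode of a process parameter, rated on 1-to-10 FMEA scales."""
    parameter: str
    occurrence: int     # 1 = very unlikely ... 10 = almost certain
    severity: int       # 1 = negligible ... 10 = critical impact on quality
    detectability: int  # 1 = always detected ... 10 = practically undetectable

    @property
    def rpn(self) -> int:
        """Risk priority number: product of the three ratings."""
        return self.occurrence * self.severity * self.detectability

# Hypothetical ratings for a few bioreactor parameters (illustration only).
modes = [
    FailureMode("pH setpoint drift", occurrence=4, severity=7, detectability=3),
    FailureMode("DO control failure", occurrence=2, severity=8, detectability=2),
    FailureMode("Feed rate deviation", occurrence=5, severity=6, detectability=6),
]

# Rank parameters by RPN to flag candidates for process characterization.
for m in sorted(modes, key=lambda m: m.rpn, reverse=True):
    print(f"{m.parameter:22s} RPN = {m.rpn}")
```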
Abstract:
Switzerland implemented risk-based monitoring of Swiss dairy products in 2002, based on a risk assessment (RA) that considered the probability of exceeding a microbiological limit value set by law. A new RA was launched in 2007 to review and further develop the previous assessment, and to make recommendations for future risk-based monitoring according to current risks. The resulting qualitative RA was designed to ascertain the risk to human health from the consumption of Swiss dairy products. The products and microbial hazards to be considered in the RA were determined based on a risk profile. The hazards included Campylobacter spp., Listeria monocytogenes, Salmonella spp., Shiga toxin-producing Escherichia coli, coagulase-positive staphylococci and Staphylococcus aureus enterotoxin. The release assessment considered the prevalence of the hazards in bulk milk samples, the influence of the process parameters on the microorganisms, and the influence of the type of dairy. The exposure assessment was linked to the production volume. An overall probability was estimated combining the probabilities of release and exposure for each combination of hazard, dairy product and type of dairy. This overall probability represents the likelihood of a product from a certain type of dairy exceeding the microbiological limit value and being passed on to the consumer. The consequences could not be fully assessed due to a lack of detailed information on the number of disease cases caused by the consumption of dairy products. The results were expressed as a ranking of overall probabilities. Finally, recommendations for the design of the risk-based monitoring programme and for filling the identified data gaps were given. The aims of this work were (i) to present the qualitative RA approach for Swiss dairy products, which could be adapted to other settings and (ii) to discuss the opportunities and limitations of the qualitative method.
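The combination of release and exposure probabilities into an overall probability can be illustrated with a small ordinal model. The sketch below assumes a "take the lower of the two levels" combination rule and invented hazard/product combinations; the paper's actual categories and combination rules are not given in the abstract:

```python
# Ordinal scale for the qualitative probability levels (hypothetical labels).
LEVELS = ["negligible", "low", "medium", "high"]
RANK = {level: i for i, level in enumerate(LEVELS)}

def overall_probability(release: str, exposure: str) -> str:
    # Release AND exposure must both occur for a contaminated product to
    # reach the consumer, so the combined level is capped by the lower of
    # the two -- one common convention in qualitative risk assessment.
    return LEVELS[min(RANK[release], RANK[exposure])]

# Invented hazard / product / dairy-type combinations for illustration.
combos = [
    ("L. monocytogenes", "soft cheese", "artisanal", "medium", "low"),
    ("S. aureus enterotoxin", "semi-hard cheese", "industrial", "low", "high"),
    ("Salmonella spp.", "butter", "industrial", "negligible", "high"),
]
rows = [(h, p, d, overall_probability(r, e)) for h, p, d, r, e in combos]
for h, p, d, level in sorted(rows, key=lambda r: RANK[r[3]], reverse=True):
    print(f"{h:22s} {p:18s} {d:10s} -> {level}")
```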
Abstract:
QUESTIONS UNDER STUDY To improve the response to deteriorating patients during their hospital stay, the University Hospital Bern has introduced a Medical Emergency Team (MET). The aim of this retrospective cohort study is to review the preceding factors, patient characteristics and process parameters of MET calls since the introduction of the team, and their correlation with patient outcomes. METHODS Data on patient characteristics, parameters related to MET activation and intervention, and patient outcomes were evaluated. A Vital Sign Score (VSS), defined as the sum of occurrences of vital sign abnormalities, was calculated from all physiological parameters before and during the MET event and correlated with hospital outcomes. RESULTS A total of 1,628 MET calls in 1,317 patients occurred; 262 (19.9%) of patients with MET calls during their hospital stay died. The VSS before the MET event (odds ratio [OR] 1.78, 95% confidence interval [CI] 1.50-2.13; AUROC 0.63; all p <0.0001) and during the MET call (OR 1.60, 95% CI 1.41-1.83; AUROC 0.62; all p <0.0001) were significantly correlated with patient outcomes. A significant increase in MET calls from 5.2 to 16.5 per 1000 hospital admissions (p <0.0001) and a decrease in cardiac arrest calls in the MET perimeter from 1.6 per 1000 admissions in 2008 to 0.8 per 1000 admissions were observed during the study period (p = 0.014). CONCLUSIONS The VSS is a significant predictor of mortality in patients assessed by the MET. Increasing MET utilisation coincided with a decrease in cardiac arrest calls in the MET perimeter.
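As a rough illustration of how such a Vital Sign Score might be computed, the sketch below counts abnormal vital signs against hypothetical reference ranges; the study's actual abnormality criteria are not stated in the abstract:

```python
# Hypothetical reference ranges (low, high); one VSS point per sign outside.
THRESHOLDS = {
    "heart_rate": (40, 130),       # beats/min
    "systolic_bp": (90, 200),      # mmHg
    "respiratory_rate": (8, 30),   # breaths/min
    "spo2": (90, 100),             # %
    "temperature": (35.0, 39.0),   # °C
}

def vital_sign_score(vitals: dict) -> int:
    """Count the vital signs outside their reference range (one point each)."""
    score = 0
    for name, value in vitals.items():
        low, high = THRESHOLDS[name]
        if value < low or value > high:
            score += 1
    return score

patient = {"heart_rate": 142, "systolic_bp": 84, "respiratory_rate": 24,
           "spo2": 88, "temperature": 38.1}
print(vital_sign_score(patient))  # -> 3 (tachycardia, hypotension, low SpO2)
```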
Abstract:
Surgical robots have been proposed ex vivo to drill precise holes in the temporal bone for minimally invasive cochlear implantation. The main risk of the procedure is damage to the facial nerve, due either to mechanical interaction or to temperature elevation during the drilling process. To evaluate the thermal risk of the drilling process, a simplified model is proposed which aims to enable an assessment of the risk posed to the facial nerve for a given set of constant process parameters for different mastoid bone densities. The model uses the bone density distribution along the drilling trajectory in the mastoid bone to calculate a time-dependent heat production function at the tip of the drill bit. Using a time-dependent moving point source Green's function, the heat equation can be solved at a certain point in space so that the resulting temperatures can be calculated over time. The model was calibrated and initially verified with in vivo temperature data collected during minimally invasive robotic drilling of 12 holes in four different sheep. The sheep were anesthetized, and the temperature elevations were measured with a thermocouple inserted in a previously drilled hole next to the planned drilling trajectory. Bone density distributions were extracted from pre-operative CT data by averaging Hounsfield values over the drill bit diameter. Post-operative μCT data were used to verify the drilling accuracy of the trajectories. The comparison of measured and calculated temperatures shows a very good match for both heating and cooling phases. The average prediction error of the maximum temperature was less than 0.7 °C, and the average root mean square error was approximately 0.5 °C. To analyze potential thermal damage, the model was used to calculate temperature profiles and cumulative equivalent minutes at 43 °C at the minimal distance to the facial nerve. For the selected drilling parameters, the temperature elevation profiles and cumulative equivalent minutes suggest that the thermal elevation of this minimally invasive cochlear implantation surgery may pose a risk to the facial nerve, especially in sclerotic or high-density mastoid bones. Optimized drilling parameters need to be evaluated, and the model could be used for future risk evaluation.
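The cumulative equivalent minutes at 43 °C used above is the standard Sapareto-Dewey thermal dose measure. A minimal sketch of its computation, applied to an assumed synthetic temperature profile near the nerve:

```python
import numpy as np

def cem43(temps_c: np.ndarray, dt_min: float) -> float:
    """Cumulative equivalent minutes at 43 °C (Sapareto-Dewey thermal dose).

    temps_c: temperature samples in °C; dt_min: sampling interval in minutes.
    R = 0.5 above the 43 °C breakpoint, 0.25 below it.
    """
    r = np.where(temps_c >= 43.0, 0.5, 0.25)
    return float(np.sum(dt_min * r ** (43.0 - temps_c)))

# Hypothetical profile: a brief rise toward 45 °C during drilling, then
# cooling back to body temperature (2 min at 1 s sampling).
t = np.linspace(0.0, 2.0, 121)
temps = 37.0 + 8.0 * np.exp(-((t - 0.5) / 0.3) ** 2)
print(f"peak {temps.max():.1f} °C, CEM43 = {cem43(temps, dt_min=1/60):.3f} min")
```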
Abstract:
In order to improve the osseointegration of endosseous implants made from titanium, the structure and composition of the surface were modified. Mirror-polished commercially pure (cp) titanium substrates were coated by the sol-gel process with different oxides: TiO₂, SiO₂, Nb₂O₅ and SiO₂-TiO₂. The coatings were physically and biologically characterized. Infrared spectroscopy confirmed the absence of organic residues. Ellipsometry determined the thickness of the layers to be approximately 100 nm. High-resolution scanning electron microscopy (SEM) and atomic force microscopy revealed a nanoporous structure in the TiO₂ and Nb₂O₅ layers, whereas the SiO₂ and SiO₂-TiO₂ layers appeared almost smooth. The Ra values, as determined by white-light interferometry, ranged from 20 to 50 nm. The surface energy determined by the sessile-drop contact angle method revealed the highest polar component for SiO₂ (30.7 mJ m⁻²) and the lowest for cp-Ti and 316L stainless steel (6.7 mJ m⁻²). The cytocompatibility of the oxide layers was investigated with MC3T3-E1 osteoblasts in vitro (proliferation, vitality, morphology and cytochemical/immunolabelling of actin and vinculin). Higher cell proliferation rates were found on SiO₂-TiO₂ and TiO₂, and lower on Nb₂O₅ and SiO₂, whereas the vitality rates increased for cp-Ti and Nb₂O₅. Cytochemical assays showed that all substrates induced a normal cytoskeleton and well-developed focal adhesion contacts. SEM revealed good cell attachment for all coating layers. In conclusion, the sol-gel-derived oxide layers were thin, pure and nanostructured; the different osteoblast responses to these coatings are consequently explained by the mutual action and coadjustment of different interrelated surface parameters.
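Splitting surface energy into polar and dispersive components from sessile-drop contact angles is commonly done with the Owens-Wendt geometric-mean method; whether this exact variant was used here is not stated. A sketch with standard literature values for two probe liquids and invented contact angles:

```python
import numpy as np

# Owens-Wendt decomposition of solid surface energy from contact angles with
# two probe liquids. Liquid components are standard literature values
# (total, dispersive, polar; mJ/m^2); the angles below are invented.
LIQUIDS = {
    "water":         (72.8, 21.8, 51.0),
    "diiodomethane": (50.8, 50.8, 0.0),
}
angles_deg = {"water": 65.0, "diiodomethane": 42.0}  # hypothetical data

# Per liquid: gamma_L (1 + cos theta) / 2 = sqrt(gSd*gLd) + sqrt(gSp*gLp),
# which is linear in (sqrt(gSd), sqrt(gSp)).
A, b = [], []
for name, (gl, gld, glp) in LIQUIDS.items():
    theta = np.radians(angles_deg[name])
    A.append([np.sqrt(gld), np.sqrt(glp)])
    b.append(gl * (1 + np.cos(theta)) / 2)
x, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
dispersive, polar = x[0] ** 2, x[1] ** 2
print(f"dispersive = {dispersive:.1f}, polar = {polar:.1f}, "
      f"total = {dispersive + polar:.1f} mJ/m^2")
```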
Abstract:
In the context of expensive numerical experiments, a promising solution for alleviating the computational costs consists of using partially converged simulations instead of exact solutions. The gain in computational time is at the price of precision in the response. This work addresses the issue of fitting a Gaussian process model to partially converged simulation data for further use in prediction. The main challenge consists of the adequate approximation of the error due to partial convergence, which is correlated in both design variables and time directions. Here, we propose fitting a Gaussian process in the joint space of design parameters and computational time. The model is constructed by building a nonstationary covariance kernel that reflects accurately the actual structure of the error. Practical solutions are proposed for solving parameter estimation issues associated with the proposed model. The method is applied to a computational fluid dynamics test case and shows significant improvement in prediction compared to a classical kriging model.
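A minimal sketch of the idea of kriging in the joint (design variable, computational time) space, using a strongly simplified stand-in for the paper's nonstationary kernel: the amplitude of the partial-convergence error is assumed to decay with simulation time, so the covariance contains a variance term that shrinks as the solver converges.

```python
import numpy as np

def kernel(X1, T1, X2, T2, ell=0.3, tau=2.0, sigma0=1.0):
    """Joint-space covariance: stationary RBF in the design variable x times
    (1 + a rank-one term whose amplitude decays with computational time t),
    mimicking partial-convergence error that fades as t grows."""
    dx = X1[:, None] - X2[None, :]
    k_x = np.exp(-0.5 * (dx / ell) ** 2)
    amp = sigma0 * np.exp(-np.add.outer(T1, T2) / (2 * tau))
    return k_x * (1.0 + amp)

# Hypothetical partially converged observations y(x, t): the error term
# exp(-t) fades as the simulation runs longer.
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, 20)
T = rng.uniform(0.1, 5.0, 20)
y = np.sin(2 * np.pi * X) + np.exp(-T) * rng.normal(size=20)

# Predict at large t, i.e. toward the converged response.
Xs, Ts = np.linspace(0, 1, 5), np.full(5, 10.0)
K = kernel(X, T, X, T) + 1e-6 * np.eye(20)
mu = kernel(Xs, Ts, X, T) @ np.linalg.solve(K, y)
print(np.round(mu, 2))
```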
The impact of common versus separate estimation of orbit parameters on GRACE gravity field solutions
Abstract:
Gravity field parameters are usually determined from observations of the GRACE satellite mission together with arc-specific parameters in a generalized orbit determination process. When the estimation of gravity field parameters is separated from the determination of the satellites’ orbits, correlations between orbit parameters and gravity field coefficients are ignored and the latter parameters are biased towards the a priori force model. We are thus confronted with a kind of hidden regularization. To decipher the underlying mechanisms, the Celestial Mechanics Approach is complemented by tools to modify the impact of the pseudo-stochastic arc-specific parameters at the normal equation level and to efficiently generate ensembles of solutions. By introducing a time-variable a priori model and solving for hourly pseudo-stochastic accelerations, a significant reduction of noisy striping in the monthly solutions can be achieved. Setting up more frequent pseudo-stochastic parameters results in a further reduction of the noise, but also in a notable damping of the observed geophysical signals. To quantify the effect of the a priori model on the monthly solutions, the process of fixing the orbit parameters is replaced by an equivalent introduction of special pseudo-observations, i.e., by explicit regularization. The contribution of the a priori information introduced in this way is determined by a contribution analysis. The presented mechanism is universally valid: it may be used to separate any subset of parameters by pseudo-observations of a special design and to quantify the damage imposed on the solution.
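The equivalence between fixing a subset of parameters and adding heavily weighted pseudo-observations, together with the contribution analysis, can be sketched on a toy least-squares problem (all dimensions, weights and values below are invented):

```python
import numpy as np

rng = np.random.default_rng(1)
m, n_g, n_a = 200, 5, 3          # observations, "gravity" and arc-specific params
A = rng.normal(size=(m, n_g + n_a))
x_true = np.concatenate([np.ones(n_g), 0.1 * np.ones(n_a)])
l = A @ x_true + 0.01 * rng.normal(size=m)

N, b = A.T @ A, A.T @ l          # normal equations N x = b

# Fixing the arc-specific parameters to a priori values x_a0 is equivalent to
# adding pseudo-observations x_a = x_a0 with weight w at the normal-equation
# level; a large w effectively fixes the parameters.
w = 1e6
P = np.zeros((n_g + n_a, n_g + n_a))
P[n_g:, n_g:] = w * np.eye(n_a)
x_a0 = np.zeros(n_a)
b_reg = b + np.concatenate([np.zeros(n_g), w * x_a0])
x = np.linalg.solve(N + P, b_reg)

# Contribution analysis: share of each estimate determined by the pseudo-
# observations rather than the data, diag((N + P)^-1 P).
contrib = np.diag(np.linalg.solve(N + P, P))
print(np.round(contrib, 3))      # ~0 for data-driven, ~1 for regularized params
```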
Abstract:
Charcoal analysis was conducted on sediment cores from three lakes to assess the relationship between the area and number of charcoal particles. Three charcoal-size parameters (maximum breadth, maximum length and area) were measured on sediment samples representing various vegetation types, including shrub tundra, boreal forest and temperate forest. These parameters and charcoal size-class distributions do not differ statistically between two sites where the same preparation technique (glycerine pollen slides) was used, but they differ for the same core when different techniques were applied. Results suggest that differences in charcoal size and size-class distribution are mainly caused by different preparation techniques and are not related to vegetation-type variation. At all three sites, the area and number concentrations of charcoal particles are highly correlated in standard pollen slides; 82–83% of the variability of the charcoal-area concentration can be explained by the particle-number concentration. Comparisons between predicted and measured area concentrations show that regression equations linking charcoal number and area concentrations can be used across sites as long as the same pollen-preparation technique is used. Thus it is concluded that it is unnecessary to measure charcoal areas in standard pollen slides – a time-consuming and tedious process.
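The regression linking number and area concentrations is an ordinary least-squares fit; a minimal sketch on synthetic paired measurements (the coefficients below are invented, not the paper's):

```python
import numpy as np

# Hypothetical paired measurements per sample: charcoal particle-number
# concentration (particles/cm^3) and area concentration (mm^2/cm^3).
rng = np.random.default_rng(42)
number_conc = rng.uniform(50, 500, 40)
area_conc = 0.012 * number_conc + rng.normal(0, 0.4, 40)  # synthetic relation

# Ordinary least-squares fit: area = slope * number + intercept.
slope, intercept = np.polyfit(number_conc, area_conc, 1)
predicted = slope * number_conc + intercept
ss_res = np.sum((area_conc - predicted) ** 2)
ss_tot = np.sum((area_conc - area_conc.mean()) ** 2)
r2 = 1 - ss_res / ss_tot
print(f"area ~ {slope:.4f} * number + {intercept:.3f},  R^2 = {r2:.2f}")
```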