982 results for No Exit


Relevance: 10.00%

Abstract:

Addressing current limitations of state-of-the-art instrumentation in aerosol research, the aim of this work was to explore and assess the applicability of a novel soft ionization technique, flowing atmospheric-pressure afterglow (FAPA), for the mass spectrometric analysis of airborne particulate organic matter. Among other soft ionization methods, the FAPA technique was developed in the last decade during the advent of ambient desorption/ionization mass spectrometry (ADI–MS). It is based on a helium glow discharge plasma at atmospheric pressure, in which excited helium species and primary reagent ions are generated; these exit the discharge region through a capillary electrode, forming the so-called afterglow region where desorption and ionization of the analytes occur. Fragmentation of the analytes during ionization is commonly reported to occur only to a minimal extent, predominantly resulting in the formation of quasimolecular ions, i.e., [M+H]+ and [M–H]– in the positive and negative ion mode, respectively. Detection and identification of signals and their corresponding compounds in the acquired mass spectra are thus facilitated. The first part of this study focuses on the application, characterization, and assessment of FAPA–MS in the offline mode, i.e., desorption and ionization of the analytes from surfaces. Experiments in both positive and negative ion mode revealed ionization patterns for a variety of compound classes comprising alkanes, alcohols, aldehydes, ketones, carboxylic acids, organic peroxides, and alkaloids. Besides the frequently emphasized detection of quasimolecular ions, a broad range of signals for adducts and losses was found. Additionally, the capabilities and limitations of the technique were studied in three proof-of-principle applications. In general, the method proved best suited for polar analytes with high volatilities and low molecular weights, ideally containing nitrogen and/or oxygen functionalities.
However, for compounds with low vapor pressures, long carbon chains, and/or high molecular weights, desorption and ionization are in direct competition with oxidation of the analytes, leading to the formation of adducts and oxidation products that impede a clear signal assignment in the acquired mass spectra. Nonetheless, FAPA–MS proved capable of detecting and identifying common limonene oxidation products in secondary organic aerosol (SOA) particles on a filter sample and is thus considered a suitable method for offline analysis of organic aerosol (OA) particles. In the second and subsequent parts, FAPA–MS was applied online, i.e., for real-time analysis of OA particles suspended in air; accordingly, the acronym AeroFAPA–MS (Aerosol FAPA–MS) was chosen to refer to this method. After optimization and characterization, the method was used to measure a range of model compounds and to evaluate typical ionization patterns in the positive and negative ion mode. In addition, results from laboratory studies as well as from a field campaign in Central Europe (F–BEACh 2014) are presented and discussed. During the F–BEACh campaign, AeroFAPA–MS was used in combination with complementary MS techniques, giving a comprehensive characterization of the sampled OA particles. For example, several common SOA marker compounds were identified in real time by MSn experiments, indicating that photochemically aged SOA particles were present during the campaign period. Moreover, AeroFAPA–MS was capable of detecting highly oxidized sulfur-containing compounds in the particle phase, representing the first real-time measurements of this compound class. Further comparisons with data from other aerosol and gas-phase measurements suggest that both particulate sulfate and highly oxidized peroxy radicals in the gas phase might play a role in the formation of these species.
Besides applying AeroFAPA–MS to the analysis of aerosol particles, desorption processes of particles in the afterglow region were investigated in order to gain a more detailed understanding of the method. Whereas in the previous measurements aerosol particles were pre-evaporated prior to AeroFAPA–MS analysis, no external heat source was applied in this part. Particle size distribution measurements before and after the AeroFAPA source revealed that only an interfacial layer of the OA particles is desorbed and, thus, chemically characterized. For particles with initial diameters of 112 nm, these measurements yielded desorption radii of 2.5–36.6 nm at discharge currents of 15–55 mA. In addition, the method was applied to laboratory-generated core–shell particles in a proof-of-principle study. As expected, compounds residing in the shell of the particles were predominantly desorbed and ionized, with increasing probing depths, suggesting that AeroFAPA–MS might represent a promising technique for depth profiling of OA particles in future studies.
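The reported desorption radii can be related to the probed volume fraction with simple sphere geometry. The sketch below assumes the desorption radius describes the depth of the removed interfacial layer; the function name and the derived percentages are illustrative, not taken from the study.

```python
# Fraction of a spherical particle's volume removed when a surface layer
# of depth d is desorbed from a particle of initial radius r:
#   fraction = 1 - ((r - d) / r)**3
# Interpreting the reported desorption radii as layer depths (assumption).

def desorbed_fraction(diameter_nm, depth_nm):
    r = diameter_nm / 2.0
    remaining = max(r - depth_nm, 0.0) / r
    return 1.0 - remaining ** 3

# For the 112 nm particles, the 2.5-36.6 nm range spans roughly 13-96%
# of the particle volume:
lo = desorbed_fraction(112, 2.5)
hi = desorbed_fraction(112, 36.6)
```

Under this reading, the smallest discharge current probes only a thin surface shell, while the largest removes nearly the whole particle, consistent with the depth-profiling interpretation above.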

Relevance: 10.00%

Abstract:

In contact shots, all the materials emerging from the muzzle (combustion gases, soot, powder grains, and metals from the primer) are driven into the depth of the entrance wound and the following sections of the bullet track. The so-called "pocket" ("powder cavity") under the skin containing soot and gunpowder particles is regarded as a significant indicator of a contact entrance wound, since one would expect the quantity of gunshot residue (GSR) deposited along the bullet's path to decline rapidly towards the exit hole. Nevertheless, experience has shown that soot, powder particles, and carboxyhemoglobin may be found not only in the initial part of the wound channel but also far away from the entrance and even at the exit. In order to investigate the propagation of GSRs under standardized conditions, contact test shots were fired at composite models of pig skin and 25-cm-long gelatin blocks using 9-mm Luger pistol cartridges with two different primers (Sinoxid® and Sintox®). Subsequently, 1-cm-thick layers of the gelatin blocks were examined for their primer element content (lead, barium, and antimony as discharge residues of Sinoxid®, as well as zinc and titanium from Sintox®) by means of X-ray fluorescence spectroscopy. As expected, the highest element concentrations were found in the initial parts of the bullet tracks, but the distal sections also contained detectable amounts of the respective primer elements. The same was true for amorphous soot and unburned or partly burned powder particles, which could be demonstrated even at the exit site. With the help of a high-speed camera it was shown that, for a short time, the temporary cavity extends from the entrance to the exit, facilitating the unhindered spread of discharge residues along the whole bullet path.

Relevance: 10.00%

Abstract:

Objectives: Previous research conducted in the late 1980s suggested that vehicle impacts following an initial barrier collision increase severe occupant injury risk. Now over 25 years old, those data are no longer representative of the currently installed barriers or the present US vehicle fleet. The purpose of this study is to provide a present-day assessment of secondary collisions and to determine whether current full-scale barrier crash testing criteria provide an indication of secondary collision risk for real-world barrier crashes. Methods: To characterize secondary collisions, 1,363 (596,331 weighted) real-world barrier midsection impacts selected from 13 years (1997–2009) of in-depth crash data available through the National Automotive Sampling System (NASS) / Crashworthiness Data System (CDS) were analyzed. Scene diagrams and available scene photographs were used to determine roadside- and barrier-specific variables unavailable in NASS/CDS. Binary logistic regression models were developed for second-event occurrence and resulting driver injury. To investigate current secondary collision crash test criteria, 24 full-scale crash test reports were obtained for common non-proprietary US barriers, and the risk of secondary collisions was determined using recommended evaluation criteria from National Cooperative Highway Research Program (NCHRP) Report 350. Results: Secondary collisions were found to occur in approximately two-thirds of crashes where a barrier is the first object struck. Barrier lateral stiffness, post-impact vehicle trajectory, vehicle type, and pre-impact tracking conditions were found to be statistically significant contributors to secondary event occurrence. The presence of a second event was found to increase the likelihood of a serious driver injury by a factor of 7 compared to cases with no second event. The NCHRP Report 350 exit angle criterion was found to underestimate the risk of secondary collisions in real-world barrier crashes.
Conclusions: Consistent with previous research, collisions following a barrier impact are not infrequent and substantially increase driver injury risk. The results suggest that exit-angle-based crash test criteria alone are not sufficient to predict second-collision occurrence for real-world barrier crashes.

Relevance: 10.00%

Abstract:

Job burnout is linked to job outcomes in public accounting professionals (Fogarty et al., 2000; Jones et al., 2010; Jones et al., 2012). Although women and men have entered the profession in relatively equal numbers, there is a significantly lower percentage of women partners (AICPA, 2011). Extant research has not sufficiently explored how burnout may affect the genders differently and whether these differences may lend insight into women's choices to exit the profession. A large participant group with similar proportions of women (n=836) and men (n=845) allowed examination of the burnout construct at a more profound level than in extant studies. The three dimensions of job burnout in women and men public accountants were analyzed, not only in total but also by functional area and position level. The overall findings are that women report higher levels of reduced personal accomplishment and men report higher levels of depersonalization. In light of these findings, suggestions are made for firm and individual actions that may mitigate the intensity of burnout experienced by both women and men public accountants.

Relevance: 10.00%

Abstract:

PURPOSE: The aim of this paper is to demonstrate that computed tomography (CT) and three-dimensional (3D) CT imaging techniques can be useful tools for evaluating gunshot wounds of the skull in forensic medicine. Three purposes can be achieved: (1) identifying and recognising the bullet entrance wound and, if present, the exit wound; (2) recognising the bullet's intracranial course by studying damage to bone and brain tissue; (3) suggesting hypotheses as to the dynamics of the event. MATERIALS AND METHODS: Ten cadavers of people who died of a head injury caused by a single gunshot were imaged with total-body CT prior to conventional autopsy. Three-dimensional CT reconstructions were obtained with the volume-rendering technique, and the data were analysed by two independent observers and compared with the autopsy results. RESULTS: In our experience, CT analysis and volumetric reconstruction techniques allowed the identification of the bullet entrance and exit wounds and the intracranial trajectory, as well as helping to formulate a hypothesis on the extracranial trajectory to corroborate circumstantial evidence. CONCLUSIONS: CT imaging techniques are excellent tools for addressing the most important questions of forensic medicine in cases of gunshot wounds of the skull, with results as good as (and sometimes better than) those of traditional autopsy methods.

Relevance: 10.00%

Abstract:

SETTING: Correctional settings and remand prisons. OBJECTIVE: To critically discuss calculations for epidemiological indicators of the tuberculosis (TB) burden in prisons and to provide recommendations to improve study comparability. METHODS: A hypothetical data set illustrates issues in determining incidence and prevalence. The appropriate calculation of the incidence rate is presented, and problems arising from cross-sectional surveys are clarified. RESULTS: Cases recognized during the first 3 months should be classified as prevalent at entry and excluded from any incidence rate calculation. The numerator for the incidence rate includes persons detected as having developed TB during a specified period of time subsequent to the initial 3 months. The denominator is person-time at risk from 3 months onward to the end point (TB or the end of the observation period). Preferably, entry time, exit time, and event time are known for each inmate to determine person-time at risk. Failing that, an approximation consists of the sum of monthly head counts, excluding prevalent cases and persons no longer at risk from both the numerator and the denominator. CONCLUSIONS: The varying durations of inmate incarceration in prisons pose challenges for quantifying the magnitude of the TB problem in the inmate population. Recommendations are made to measure incidence and prevalence.
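The preferred calculation described above (entry, exit, and event times known per inmate) can be sketched as follows. The records and the 90-day window standing in for the 3-month cutoff are hypothetical.

```python
from datetime import date, timedelta

WINDOW = timedelta(days=90)  # stand-in for the 3-month cutoff (assumption)

def incidence(records):
    """records: (entry, exit, tb_date or None) per inmate.
    Returns (incident cases, person-days at risk from 3 months onward)."""
    cases, days = 0, 0
    for entry, exit_, tb in records:
        if tb is not None and tb < entry + WINDOW:
            continue  # prevalent at entry: excluded from both terms
        end = tb if tb is not None else exit_  # end point: TB or end of observation
        at_risk = (end - (entry + WINDOW)).days
        if at_risk <= 0:
            continue  # released before contributing any person-time
        days += at_risk
        if tb is not None:
            cases += 1
    return cases, days

# Hypothetical roster: one full-year inmate, one prevalent case, and one
# incident case diagnosed on day 182:
roster = [
    (date(2020, 1, 1), date(2021, 1, 1), None),
    (date(2020, 1, 1), date(2020, 2, 1), date(2020, 1, 20)),
    (date(2020, 1, 1), date(2020, 12, 31), date(2020, 7, 1)),
]
cases, days = incidence(roster)
rate_per_person_year = cases / (days / 365.25)
```

The prevalent case contributes neither to the numerator nor to the denominator, exactly as the abstract recommends, while each remaining inmate contributes person-time only from the end of the 3-month window onward.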

Relevance: 10.00%

Abstract:

Although the Monte Carlo (MC) method allows accurate dose calculation for proton radiotherapy, its use is limited by long computing times. In order to gain efficiency, a new macro MC (MMC) technique for proton dose calculations has been developed. The basic principle of the MMC transport is a local-to-global MC approach. The local simulations, performed with GEANT4, consist of mono-energetic proton pencil beams impinging perpendicularly on slabs of different thicknesses and different materials (water, air, lung, adipose, muscle, spongiosa, cortical bone). In the local simulations, multiple scattering, ionization, and elastic and inelastic interactions are taken into account, and physical characteristics such as lateral displacement, direction distributions, and energy loss are scored for primary and secondary particles. The scored data from the appropriate slabs are then used for the stepwise transport of the protons in the MMC simulation, calculating the energy loss along the path between the entrance and exit positions. Additionally, based on local simulations, the radiation transport of neutrons and of the generated ions is included in the MMC dose calculations. In order to validate the MMC transport, dose distributions calculated using MMC and GEANT4 were compared for different mono-energetic proton pencil beams impinging on different phantoms, including homogeneous and inhomogeneous situations as well as a patient CT scan. The agreement of the calculated integral depth dose curves is better than 1% or 1 mm for all pencil beams and phantoms considered. For the dose profiles, the agreement is within 1% or 1 mm in all phantoms for all energies and depths. The comparison of the dose distributions calculated using either GEANT4 or MMC in the patient also shows agreement within 1% or 1 mm. The efficiency of MMC is up to 200 times higher than that of GEANT4.
The very good level of agreement in the dose comparisons demonstrates that the newly developed MMC transport yields very accurate and efficient dose calculations for proton beams.
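The local-to-global idea can be illustrated with a toy version of the stepwise transport: a lookup table (with invented numbers standing in for the GEANT4-scored slab data) gives the mean energy loss per slab for each material, and the global loop steps a proton through the phantom, depositing that loss in each slab. The real MMC transport also samples lateral displacement and direction distributions, which this sketch omits.

```python
# Toy lookup: mean energy loss (MeV) of a proton crossing one thin slab,
# keyed by material and a coarse energy bin. Values are invented for
# illustration, not GEANT4-derived.
SLAB_LOSS = {
    ("water", "high"): 0.5, ("water", "low"): 1.5,
    ("lung", "high"): 0.15, ("lung", "low"): 0.45,
}

def transport(energy_mev, phantom):
    """Step a proton slab by slab; return the energy deposited per slab."""
    dose = []
    for material in phantom:
        if energy_mev <= 0.0:
            dose.append(0.0)  # proton has stopped
            continue
        bin_ = "high" if energy_mev >= 50.0 else "low"
        loss = min(SLAB_LOSS[(material, bin_)], energy_mev)
        dose.append(loss)
        energy_mev -= loss
    return dose

# A 2 MeV proton entering four water slabs stops in the second slab:
print(transport(2.0, ["water"] * 4))  # -> [1.5, 0.5, 0.0, 0.0]
```

The efficiency gain of the real method comes from exactly this structure: the expensive physics is computed once per (material, energy) slab in the local GEANT4 runs, and the global transport reduces to cheap table lookups.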

Relevance: 10.00%

Abstract:

Brain functions such as learning, orchestrating locomotion, memory recall, and processing information all require glucose as a source of energy. During these functions, the glucose concentration decreases as glucose is consumed by brain cells. By measuring this drop in concentration, it is possible to determine which parts of the brain are used during specific functions and, consequently, how much energy the brain requires to complete the function. One way to measure in vivo brain glucose levels is with a microdialysis probe. The drawback of this analytical procedure, as with many steady-state fluid flow systems, is that the probe fluid will not reach equilibrium with the brain fluid. Therefore, the brain concentration is inferred by taking samples at multiple inlet glucose concentrations and finding a point of convergence. The goal of this thesis is to create a three-dimensional, time-dependent, finite element representation of the brain-probe system in COMSOL 4.2 that describes the diffusion and convection of glucose. Once validated with experimental results, this model can then be used to test parameters that experiments cannot access. When simulations were run using published values for physical constants (i.e., diffusivities, density, and viscosity), the resulting glucose concentrations were within the error of the experimental data, verifying that the model is an accurate representation of the physical system. In addition to accurately describing the experimental brain-probe system, the model I created is able to show the validity of zero net flux for a given experiment. A useful discovery is that the slope of the zero-net-flux line depends on the perfusate flow rate and the diffusion coefficients but is independent of the brain glucose concentration. The model was simplified with the realization that the perfusate is at thermal equilibrium with the brain throughout the active region of the probe.
This allowed the assumption that all model parameters are temperature independent. The time to steady state for the probe is approximately one minute. However, the signal degrades in the exit tubing because of Taylor dispersion, on the order of two minutes for two meters of tubing. Given an analytical instrument requiring a 5-μL aliquot, the smallest brain process measurable with this system is 13 minutes.
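The convergence-point idea behind zero net flux can be sketched numerically: perfuse at several inlet concentrations, regress the net flux (outlet minus inlet) on the inlet concentration, and read off the inlet concentration at which the net flux crosses zero. The data and the 40% recovery figure below are hypothetical.

```python
def zero_net_flux(c_in, c_out):
    """Least-squares line through (C_in, C_out - C_in); returns its
    x-intercept, i.e. the inferred brain glucose concentration."""
    n = len(c_in)
    flux = [o - i for i, o in zip(c_in, c_out)]
    mx = sum(c_in) / n
    my = sum(flux) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(c_in, flux)) / \
        sum((x - mx) ** 2 for x in c_in)
    return mx - my / slope  # where the fitted line crosses zero flux

# Noise-free example: true brain concentration 2.0 mM with 40% probe
# recovery, so C_out - C_in = 0.4 * (2.0 - C_in):
c_in = [0.0, 1.0, 2.0, 3.0, 4.0]
c_out = [c + 0.4 * (2.0 - c) for c in c_in]
brain_glucose = zero_net_flux(c_in, c_out)  # ~2.0 mM
```

Note that the slope of the fitted line (here -0.4) encodes the probe recovery, which is set by perfusate flow rate and diffusion coefficients, while the intercept recovers the brain concentration, mirroring the independence result stated above.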

Relevance: 10.00%

Abstract:

Previous research conducted in the late 1980s suggested that vehicle impacts following an initial barrier collision increase severe occupant injury risk. Now over twenty-five years old, the data used in that research are no longer representative of the currently installed barriers or the US vehicle fleet. The purpose of this study is to provide a present-day assessment of secondary collisions and to determine whether full-scale barrier crash testing criteria provide an indication of secondary collision risk for real-world barrier crashes. The analysis included 1,383 (596,331 weighted) real-world barrier midsection impacts selected from thirteen years (1997–2009) of in-depth crash data available through the National Automotive Sampling System (NASS) / Crashworthiness Data System (CDS). For each suitable case, the scene diagram and available scene photographs were used to determine roadside- and barrier-specific variables not available in NASS/CDS. Binary logistic regression models were developed for second-event occurrence and resulting driver injury. Barrier lateral stiffness, post-impact vehicle trajectory, vehicle type, and pre-impact tracking conditions were found to be statistically significant contributors to secondary event occurrence. The presence of a second event was found to increase the likelihood of a serious driver injury by a factor of seven compared to cases with no second event. Twenty-four full-scale crash test reports were obtained for common non-proprietary US barriers, and the risk of secondary collisions was determined using recommended evaluation criteria from NCHRP Report 350. It was found that the NCHRP Report 350 exit angle criterion alone was not sufficient to predict second-collision occurrence for real-world barrier crashes.

Relevance: 10.00%

Abstract:

Sarco(endo)plasmic reticulum Ca2+-ATPase isoform 2 (SERCA2) pumps belong to the family of Ca2+-ATPases responsible for the maintenance of calcium stores in the endoplasmic reticulum. In epidermal keratinocytes, SERCA2-controlled calcium stores are involved in cell cycle exit and the onset of terminal differentiation. Hence, their dysfunction was thought to provoke impaired keratinocyte cohesion and hampered terminal differentiation. Here, we assessed cultured keratinocytes and skin biopsies from a canine family with an inherited skin blistering disorder. Cells from lesional and phenotypically normal areas of one of these dogs revealed affected calcium homeostasis due to depleted SERCA2-gated stores. In phenotypically normal patient cells, this defect compromised upregulation of p21(WAF1) and delayed exit from the cell cycle. Despite this abnormality, it failed to impede the terminal differentiation process in the long term, but instead coincided with enhanced apoptosis and the appearance of chronic wounds, suggestive of secondary mutations. Collectively, these findings provide the first survey of the phenotypic consequences of depleted SERCA-gated stores for epidermal homeostasis and explain how depleted SERCA2 calcium stores provoke focal lesions rather than generalized dermatoses, a phenotype highly reminiscent of the human genodermatosis Darier disease.

Relevance: 10.00%

Abstract:

An appropriate model of recent human evolution is not only important for understanding our own history; it is also necessary to disentangle the effects of demography and selection on genome diversity. Although most genetic data support the view that our species originated recently in Africa, it is still unclear whether it completely replaced former members of the genus Homo or whether some interbreeding occurred during its range expansion. Several scenarios of modern human evolution have been proposed on the basis of molecular and paleontological data, but their likelihood has never been statistically assessed. Using DNA data from 50 nuclear loci sequenced in African, Asian, and Native American samples, we show here by extensive simulations that a simple African replacement model with exponential growth has a higher probability (78%) than alternative multiregional evolution or assimilation scenarios. A Bayesian analysis of the data under this best-supported model points to an origin of our species approximately 141 thousand years ago (kya), an exit out of Africa approximately 51 kya, and a recent colonization of the Americas approximately 10.5 kya. We also find that the African replacement model explains not only the shallow ancestry of mtDNA and Y chromosomes but also the occurrence of deep lineages at some autosomal loci, which had formerly been interpreted as a sign of interbreeding with Homo erectus.

Relevance: 10.00%

Abstract:

PURPOSE: The objective of this study was to investigate the feasibility, outcomes, and amount of small intestinal submucosa (SIS) material needed for embolization of the jugular vein (JV) in swine and sheep models. Our hypothesis was that SIS would cause vein occlusion. MATERIALS AND METHODS: The external JVs (EJVs) in swine (n = 6) and the JVs in sheep (n = 6) were occluded with fan-folded, compressed SIS strips. After percutaneous puncture of the peripheral portion of the EJV or JV, a TIPS set was used to exit the lumen centrally through the skin. The SIS strips were delivered into the isolated venous segment with a pull-through technique via a 10-Fr sheath. Follow-up venograms were obtained immediately after placement and at the time of sacrifice at 1 or 3 months. Gross examination focused on the EJV or JV and their surrounding structures, and specimens were evaluated by histology. RESULTS: SIS strip placement was successful in all cases, with immediate vein occlusion seen in 23 of 24 veins (95.8%). All EJVs treated with two strips and all JVs treated with three or four strips remained closed on 1- and 3-month follow-up venograms. Two EJVs treated with one strip and one JV treated with two strips were partially patent on venograms at 1 and 3 months. There was one inflammatory skin reaction. Necropsies revealed excluded EJV or JV segments with SIS incorporation into the vein wall. Histology demonstrated various stages of SIS remodeling, with fibrocytes, fibroblasts, endothelial cells, capillaries, and inflammatory cells. CONCLUSION: We conclude that EJV and JV ablation with SIS strips using percutaneous exit catheterization is feasible and effective in animal models. Further exploration of SIS as a vein ablation material is recommended.

Relevance: 10.00%

Abstract:

This study focuses on a specific engine: a dual-spool, separate-flow turbofan engine with an interstage turbine burner (ITB). The conventional turbofan engine has been modified to include a secondary isobaric burner, the ITB, in a transition duct between the high-pressure and the low-pressure turbines. The preliminary design phase for this modified engine starts with an aerothermodynamic cycle analysis consisting of parametric (i.e., on-design) and performance (i.e., off-design) cycle analyses. In the parametric analysis, the modified engine's performance parameters are evaluated and compared with those of the baseline engine in terms of design limitations (maximum turbine inlet temperature), flight conditions (such as flight Mach number, ambient temperature, and pressure), and design choices (such as compressor pressure ratio, fan pressure ratio, fan bypass ratio, etc.). A turbine cooling model is also included to account for the effect of cooling air on engine performance. The results from the on-design analysis confirmed the advantages of using an ITB, i.e., higher specific thrust with small increases in thrust-specific fuel consumption, less cooling air, and less NOx production, provided that the main burner exit temperature and the ITB exit temperature are properly specified. It is also important to identify the critical ITB temperature, beyond which the ITB is turned off and has no advantage at all. With the encouraging results from the parametric cycle analysis, a detailed performance cycle analysis of the same engine was conducted for steady-state engine performance prediction. The results from the off-design cycle analysis show that the ITB engine at the full throttle setting has enhanced performance over the baseline engine. Furthermore, the ITB engine operating at partial throttle settings exhibits higher thrust at lower specific fuel consumption and improved thermal efficiency over the baseline engine.
A mission analysis is also presented to predict fuel consumption in certain mission phases. Excel macro code written in Visual Basic for Applications, combined with linked spreadsheet cells, enables Excel to perform these cycle analyses. These user-friendly programs compute and plot the data sequentially without requiring users to open other post-processing programs.

Relevance: 10.00%

Abstract:

Single-screw extrusion is one of the most widely used processing methods in the plastics industry, which was the third-largest manufacturing industry in the United States in 2007 [5]. In order to optimize the single-screw extrusion process, tremendous effort has been devoted to the development of accurate models over the last fifty years, especially for polymer melting in screw extruders. This has led to a good qualitative understanding of the melting process; however, quantitative predictions of melting from the various models often show large errors in comparison with experimental data. Thus, even today, the process parameters and the geometry of the extruder channel for single-screw extrusion are determined by trial and error. Since new polymers are developed frequently, finding the optimum parameters to extrude them by trial and error is costly and time consuming. In order to reduce the time and experimental work required to optimize the process parameters and the extruder channel geometry for a given polymer, the main goal of this research was to perform a coordinated experimental and numerical investigation of melting in screw extrusion. In this work, a full three-dimensional finite element simulation of the two-phase flow in the melting and metering zones of a single-screw extruder was performed by solving the conservation equations for mass, momentum, and energy. The only previous attempt at such a three-dimensional simulation of melting in a screw extruder was made more than twenty years ago, with only limited success because of the computational power and mathematical algorithms available at the time. The dramatic improvement in computational power and mathematical knowledge now makes it possible to run full 3-D simulations of two-phase flow in single-screw extruders on a desktop PC.
To verify the numerical predictions from the full 3-D simulations of two-phase flow in single-screw extruders, a detailed experimental study was performed, comprising Maddock screw-freezing experiments, Screw Simulator experiments, and material characterization experiments. The Maddock screw-freezing experiments were performed to visualize the melting profile along the single-screw extruder channel for different screw geometry configurations; these melting profiles were compared with the simulation results. The Screw Simulator experiments were performed to collect shear stress and melting flux data for various polymers, and cone-and-plate viscometer experiments were performed to obtain the shear viscosity data needed in the simulations. An optimization code was developed to optimize two screw geometry parameters, namely the screw lead (pitch) and the depth of the metering section, such that the output rate of the extruder is maximized without exceeding the maximum temperature specified at the exit of the extruder. This optimization code used a mesh-partitioning technique to obtain the flow domain, and the simulations in this domain were performed using the code developed to simulate two-phase flow in single-screw extruders.

Relevance: 10.00%

Abstract:

An extrusion die is used to continuously produce parts with a constant cross section, such as sheets, pipes, tire components, and more complex shapes such as window seals. When polymers are used, the die is fed by a screw extruder, which melts, mixes, and pressurizes the material through the rotation of either a single or a twin screw. The polymer can then be continuously forced through the die, producing a long part in the shape of the die outlet, which is then cut to the desired length. Generally, the primary target of a well-designed die is a uniform outlet velocity without an excessive increase in the pressure required to extrude the polymer through the die. Other properties, such as temperature uniformity and residence time, are also important but are not directly considered in this work. Designing dies for optimal outlet velocity variation using simple analytical equations is feasible for basic die geometries or simple channels; because of the complexity of die geometries and of polymer material properties, however, the design of complex dies by analytical methods is difficult, and iterative methods must be used. An automated iterative method is therefore desired for die optimization. To automate the design and optimization of an extrusion die, two issues must be dealt with. The first is how to generate a new mesh for each iteration. In this work, this is approached by modifying a Parasolid file that describes a CAD part; this file is then used in commercial meshing software. Skewing the initial mesh to produce a new geometry was also employed as a second option. The second issue is an optimization problem in the presence of noise stemming from variations in the mesh and cumulative truncation errors. In this work, a simplex method and a modified trust region method were employed for the automated optimization of die geometries. For the trust region method, a discrete derivative and a BFGS Hessian approximation were used.
To deal with the noise in the objective function, the trust region method was modified to automatically adjust the discrete-derivative step size and the trust region based on changes in noise and function contour. Generally, the uniformity of the velocity at the exit of the extrusion die can be improved by increasing the resistance across the die, but this is limited by the pressure capabilities of the extruder. In the optimization, a penalty factor that increases exponentially beyond the pressure limit is applied. This penalty can be applied in two different ways: the first applies it only to designs that exceed the pressure limit, while the second applies it to designs both above and below the limit. Both of these methods were tested and compared in this work.
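The two penalty modes described above can be sketched as follows. The functional form and the constant k are assumptions chosen for illustration, not the values used in the work.

```python
import math

def penalized_objective(velocity_variation, pressure, p_limit,
                        k=5.0, one_sided=True):
    """Objective to minimize: outlet velocity variation plus a pressure
    penalty that grows exponentially from the extruder's pressure limit.
    k and the exact form are illustrative assumptions."""
    overshoot = (pressure - p_limit) / p_limit  # relative distance from limit
    if one_sided and overshoot <= 0:
        penalty = 0.0  # mode 1: only designs over the limit are penalized
    else:
        penalty = math.exp(k * overshoot)  # mode 2: applied everywhere
    return velocity_variation + penalty

# Below the limit, the one-sided form leaves the objective untouched,
# while the two-sided form adds a small term that grows rapidly as the
# design approaches and exceeds the limit:
print(penalized_objective(0.10, 20.0, 30.0))                   # -> 0.1
print(penalized_objective(0.10, 20.0, 30.0, one_sided=False))  # 0.1 + e^(-5/3)
```

The two-sided variant keeps the objective smooth across the limit, which can help the gradient-based trust region method, whereas the one-sided variant avoids biasing feasible designs away from the boundary; this trade-off mirrors the comparison described above.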