918 results for Time Line
Abstract:
Sphingosine 1-phosphate (S1P) is a potent mitogenic signal generated from sphingosine by the action of sphingosine kinases (SKs). In this study, we show that in the human arterial endothelial cell line EA.hy 926, histamine induces a time-dependent upregulation of SK-1 mRNA and protein expression, followed by increased SK-1 activity. A similar upregulation of SK-1 is observed with the direct protein kinase C (PKC) activator 12-O-tetradecanoylphorbol-13-acetate (TPA). In contrast, SK-2 activity is affected by neither histamine nor TPA. The increased SK-1 protein expression is due to stimulated de novo synthesis, since cycloheximide inhibited the delayed SK-1 protein upregulation. Moreover, the increased SK-1 mRNA expression results from increased promoter activation by histamine and TPA. Mechanistically, the transcriptional upregulation of SK-1 depends on PKC and the extracellular signal-regulated kinase (ERK) cascade, since staurosporine and the MEK inhibitor U0126 abolish the TPA-induced SK-1 induction. Furthermore, the histamine effect is abolished by the H1-receptor antagonist diphenhydramine, but not by the H2-receptor antagonist cimetidine. In parallel with the induction of SK-1, histamine and TPA stimulate increased migration of endothelial cells, which is prevented by depletion of SK-1 by small interfering RNA (siRNA). To assign this cell response to a specific PKC isoenzyme, siRNAs against PKC-alpha, -delta, and -epsilon were used to selectively downregulate the respective isoforms. Interestingly, only depletion of PKC-alpha leads to a complete loss of TPA- and histamine-triggered SK-1 induction and cell migration. In summary, these data show that PKC-alpha activation in endothelial cells by histamine-activated H1-receptors, or by direct PKC activators, leads to a sustained upregulation of SK-1 protein expression and activity, which, in turn, is critically involved in the mechanism of endothelial cell migration.
Abstract:
The AEGISS (Ascertainment and Enhancement of Gastrointestinal Infection Surveillance and Statistics) project aims to use spatio-temporal statistical methods to identify anomalies in the space-time distribution of non-specific gastrointestinal infections in the UK, using the Southampton area in southern England as a test case. In this paper, we use the AEGISS project to illustrate how spatio-temporal point process methodology can be used in the development of a rapid-response spatial surveillance system. Current surveillance of gastroenteric disease in the UK relies on general practitioners reporting cases of suspected food poisoning through a statutory notification scheme, voluntary laboratory reports of the isolation of gastrointestinal pathogens and standard reports of general outbreaks of infectious intestinal disease by public health and environmental health authorities. However, most statutory notifications are made only after a laboratory reports the isolation of a gastrointestinal pathogen. As a result, detection is delayed and the ability to react to an emerging outbreak is reduced. For more detailed discussion, see Diggle et al. (2003). A new and potentially valuable source of data on the incidence of non-specific gastroenteric infections in the UK is NHS Direct, a 24-hour phone-in clinical advice service. NHS Direct data are less likely than reports by general practitioners to suffer from spatially and temporally localized inconsistencies in reporting rates. Also, reporting delays by patients are likely to be reduced, as no appointments are needed. Against this, NHS Direct data sacrifice specificity: each call to NHS Direct is classified only according to the general pattern of reported symptoms (Cooper et al., 2003). The current paper focuses on the use of spatio-temporal statistical analysis for early detection of unexplained variation in the spatio-temporal incidence of non-specific gastroenteric symptoms, as reported to NHS Direct. Section 2 describes our statistical formulation of this problem, the nature of the available data and our approach to predictive inference. Section 3 describes the stochastic model. Section 4 gives the results of fitting the model to NHS Direct data. Section 5 shows how the model is used for spatio-temporal prediction. The paper concludes with a short discussion.
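The predictive-inference step such a surveillance system performs can be made concrete. As a minimal sketch (not the paper's code), assuming the fitted model yields Monte Carlo samples of the spatially and temporally varying relative risk R(s, t), an anomaly is flagged wherever the predictive exceedance probability P[R(s, t) > c | data] crosses a cutoff; the function name and threshold values below are illustrative:

    import numpy as np

    # risk_samples: array of shape (n_samples, n_cells) holding Monte Carlo
    # draws of relative risk for each spatial cell at the current time point.
    def exceedance_flags(risk_samples, c=2.0, cutoff=0.95):
        p_exceed = (risk_samples > c).mean(axis=0)  # estimate of P[R > c | data]
        return p_exceed, p_exceed > cutoff          # probabilities and alarm flags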
Abstract:
BACKGROUND: Gefitinib is active in patients with pretreated non-small-cell lung cancer (NSCLC). We evaluated the activity and toxicity of first-line gefitinib treatment in advanced NSCLC, followed by chemotherapy at disease progression. PATIENTS AND METHODS: In all, 63 patients with chemotherapy-naive stage IIIB/IV NSCLC received gefitinib 250 mg/day. At disease progression, gefitinib was replaced by cisplatin 80 mg/m(2) on day 1 and gemcitabine 1250 mg/m(2) on days 1 and 8 for up to six 3-week cycles. The primary end point was the disease stabilization rate (DSR) after 12 weeks of gefitinib. RESULTS: After 12 weeks of gefitinib, the DSR was 24% and the response rate (RR) was 8%. Median time to progression (TtP) was 2.5 months and median overall survival (OS) 11.5 months. Never smokers (n = 9) had a DSR of 56% and a median OS of 20.2 months; patients with epidermal growth factor receptor (EGFR) mutation (n = 4) had a DSR of 75%, and their median OS was not reached after a follow-up of 21.6 months. In all, 41 patients received chemotherapy, with an overall RR of 34%, a DSR of 71% and a median TtP of 6.7 months. CONCLUSIONS: First-line gefitinib monotherapy led to a DSR of 24% at 12 weeks in an unselected patient population. Never smokers and patients with EGFR mutations tended to have a better outcome; hence, further trials in selected patients are warranted.
Abstract:
Early allogeneic hematopoietic stem cell transplantation (HSCT) has been proposed as the primary treatment modality for patients with chronic myeloid leukemia (CML). This concept has been challenged by transplantation mortality and improved drug therapy. In a randomized study, primary HSCT and best available drug treatment (IFN-based) were compared in newly diagnosed chronic phase CML patients. Assignment to treatment strategy was by genetic randomization according to the availability of a matched related donor. Evaluation followed the intention-to-treat principle. Six hundred and twenty-one patients with chronic phase CML were stratified for eligibility for HSCT. Three hundred and fifty-four patients (62% male; median age, 40 years; range, 11-59 years) were eligible and randomized. One hundred and thirty-five patients (38%) had a matched related donor, of whom 123 (91%) received a transplant within a median of 10 months (range, 2-106 months) from diagnosis. Two hundred and nineteen patients (62%) had no related donor and received best available drug treatment. With an observation time of up to 11.2 years (median, 8.9 years), survival was superior for patients with drug treatment (P = .049), the superiority being most pronounced in low-risk patients (P = .032). The general recommendation of HSCT as a first-line treatment option in chronic phase CML can no longer be maintained. It should be replaced by a trial with modern drug treatment first.
Abstract:
BACKGROUND: Knowledge of the number of recent HIV infections is important for epidemiologic surveillance. Over the past decade, approaches have been developed to estimate this number by testing HIV-seropositive specimens with assays that discriminate the lower concentration and avidity of HIV antibodies in early infection. We have investigated whether this "recency" information can also be gained from an HIV confirmatory assay. METHODS AND FINDINGS: The ability of a line immunoassay (INNO-LIA HIV I/II Score, Innogenetics) to distinguish recent from older HIV-1 infection was evaluated in comparison with the Calypte HIV-1 BED Incidence enzyme immunoassay (BED-EIA). Both tests were conducted prospectively in all HIV infections newly diagnosed in Switzerland from July 2005 to June 2006. Clinical and laboratory information indicative of recent or older infection was obtained from physicians at the time of HIV diagnosis and used as the reference standard. BED-EIA and various recency algorithms utilizing the antibody reaction to INNO-LIA's five HIV-1 antigen bands were evaluated by logistic regression analysis. A total of 765 HIV-1 infections, 748 (97.8%) with complete test results, were newly diagnosed during the study. A negative or indeterminate HIV antibody assay at diagnosis, symptoms of primary HIV infection, or a negative HIV test during the past 12 mo classified 195 infections (26.1%) as recent (≤12 mo). Symptoms of CDC stages B or C classified 161 infections (21.5%) as older, and 392 patients with no symptoms remained unclassified. BED-EIA ruled 65% of the 195 recent infections as recent and 80% of the 161 older infections as older. Two INNO-LIA algorithms showed 50% and 40% sensitivity combined with 95% and 99% specificity, respectively. Estimation of recent infection in the entire study population, based on the actual results of the three tests and adjusted for each test's sensitivity and specificity, yielded 37% for BED-EIA compared to 35% and 33% for the two INNO-LIA algorithms. Window-based estimation with BED-EIA yielded 41% (95% confidence interval 36%-46%). CONCLUSIONS: Recency information can be extracted from INNO-LIA-based confirmatory testing at no additional cost. This method should improve epidemiologic surveillance in countries that routinely use INNO-LIA for HIV confirmation.
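The abstract does not spell out the adjustment formula; the standard correction for imperfect sensitivity (Se) and specificity (Sp), the Rogan-Gladen estimator, is a plausible reading of the adjustment used. It recovers the true proportion of recent infections p from the apparent (test-positive) proportion p_obs:

    \hat{p} = \frac{p_{\mathrm{obs}} + \mathrm{Sp} - 1}{\mathrm{Se} + \mathrm{Sp} - 1}

For example, with illustrative values Se = 0.50 and Sp = 0.95 (the first INNO-LIA algorithm) and an apparent proportion of 0.21, the corrected estimate is (0.21 + 0.95 - 1)/(0.50 + 0.95 - 1) ≈ 0.36.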
Abstract:
BACKGROUND: To determine the activity and tolerability of adding cetuximab to the oxaliplatin and capecitabine (XELOX) combination in the first-line treatment of metastatic colorectal cancer (MCC). PATIENTS AND METHODS: In a multicenter two-arm phase II trial, patients were randomized to receive oxaliplatin 130 mg/m(2) on day 1 and capecitabine 1000 mg/m(2) twice daily on days 1-14 every 3 weeks, either alone or in combination with standard-dose cetuximab. Treatment was limited to a maximum of six cycles. RESULTS: Seventy-four patients with good performance status entered the trial. Objective partial response rates after external review and radiological confirmation were 14% and 41% in the XELOX and XELOX + cetuximab arms, respectively. Stable disease was observed in 62% and 35% of the patients, resulting in 76% disease control in both arms. Cetuximab led to skin rash in 65% of the patients. Median overall survival was 16.5 months for arm A and 20.5 months for arm B. Median time to progression was 5.8 months for arm A and 7.2 months for arm B. CONCLUSION: The differences in response rates between the treatment arms indicate that cetuximab may improve outcome with XELOX. The proper place of cetuximab, oxaliplatin and fluoropyrimidine combinations in the first-line treatment of MCC has to be assessed in phase III trials.
Abstract:
BACKGROUND: We evaluated previously established regimens of capecitabine plus vinorelbine in older patients with advanced breast cancer, stratified for the presence versus absence of bone metastases. PATIENTS AND METHODS: Patients ≥65 years who had received no prior chemotherapy for advanced breast cancer received up to six 21-day cycles of vinorelbine 20 mg/m(2) i.v. on days 1 + 8 with oral capecitabine on days 1-14 (1,000 vs. 1,250 mg/m(2) daily in patients with vs. without bone involvement). RESULTS: Median age was 72 years in patients with bone metastases (n = 47) and 75 years in patients without bone metastases (n = 23). Response rates were 43% (95% confidence interval, CI, 28.3-58.8) and 57% (95% CI = 34.5-76.8), respectively. Median time to progression was 4.3 months (95% CI = 3.5-6.0) and 7.0 months (CI = 4.1-8.3), respectively. Neutropenia was the most common toxicity, with grade 3/4 occurring in 43% and 39%, respectively. Pulmonary embolism was seen in 5 patients and grade 3 thrombosis in 3. Other toxicities were mild to moderate. CONCLUSIONS: These regimens of capecitabine and vinorelbine are active and well tolerated in patients ≥65 years with advanced breast cancer. Response rates were comparable to published results. The lower capecitabine doses appeared appropriate given the advanced age, bone involvement and prior radiotherapy.
Abstract:
Target localization has a wide range of military and civilian applications in wireless mobile networks. Examples include battlefield surveillance, emergency 911 (E911), traffic alert, habitat monitoring, resource allocation, routing, and disaster mitigation. Basic localization techniques include time-of-arrival (TOA), direction-of-arrival (DOA) and received-signal-strength (RSS) estimation. Techniques based on TOA and DOA are very sensitive to the availability of line-of-sight (LOS), the direct path between the transmitter and the receiver. If LOS is not available, TOA and DOA estimation errors create a large localization error. In order to reduce NLOS localization error, NLOS identification, mitigation, and localization techniques have been proposed. This research investigates NLOS identification for multiple-antenna radio systems. The techniques proposed in the literature mainly use one antenna element to enable NLOS identification. When a single antenna is utilized, only limited features of the wireless channel can be exploited to identify NLOS situations. However, in DOA-based wireless localization systems, multiple antenna elements are available. In addition, multiple-antenna technology has been adopted in many widely used wireless systems such as wireless LAN 802.11n and WiMAX 802.16e, which are good candidates for localization-based services. In this work, the potential of spatial channel information for high-performance NLOS identification is investigated. Considering narrowband multiple-antenna wireless systems, two NLOS identification techniques are proposed. First, the use of the spatial correlation of channel coefficients across antenna elements as a metric for NLOS identification is proposed. In order to obtain the spatial correlation, a new multi-input multi-output (MIMO) channel model based on rough surface theory is proposed. This model can be used to compute the spatial correlation between an antenna pair separated by any distance. In addition, a new NLOS identification technique that exploits the statistics of the phase difference across two antenna elements is proposed. This technique assumes the phases received across two antenna elements are uncorrelated. This assumption is validated based on the well-known circular and elliptic scattering models. Next, it is proved that the channel Rician K-factor is a function of the phase-difference variance. Exploiting the Rician K-factor, techniques to identify NLOS scenarios are proposed. Considering wideband multiple-antenna wireless systems which use MIMO orthogonal frequency division multiplexing (OFDM) signaling, space-time-frequency channel correlation is exploited to attain NLOS identification in time-varying, frequency-selective and space-selective radio channels. Novel NLOS identification measures based on space, time and frequency channel correlation are proposed and their performances are evaluated. These measures achieve better NLOS identification performance than those that use only space, time or frequency.
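As an illustration of the first metric, here is a minimal hypothetical sketch (not the dissertation's code) of using the spatial correlation of channel coefficients across two antenna elements as an NLOS indicator: heavy multipath scattering under NLOS conditions decorrelates the branches, while a dominant LOS component keeps the correlation high. The decision threshold is illustrative and would in practice be derived from the proposed rough-surface MIMO channel model:

    import numpy as np

    def is_nlos(h1, h2, threshold=0.5):
        """h1, h2: complex channel coefficients observed on two antenna
        elements over repeated snapshots. Returns True if NLOS is declared."""
        # Complex cross-correlation coefficient between the antenna branches.
        num = np.mean((h1 - h1.mean()) * np.conj(h2 - h2.mean()))
        den = np.sqrt(np.var(h1) * np.var(h2))
        rho = np.abs(num / den)
        return rho < threshold  # low spatial correlation suggests NLOS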
Abstract:
Prediction of radiated fields from transmission lines has not previously been studied from a panoptical power system perspective. The application of BPL technologies to overhead transmission lines would benefit greatly from an ability to simulate real power system environments, not limited to the transmission lines themselves. Presently, circuit-based transmission line models used by EMTP-type programs utilize Carson’s formula for a waveguide parallel to an interface. This formula is not valid for calculations at high frequencies, considering effects of earth return currents. This thesis explains the challenges of developing such improved models, explores an approach to combining circuit-based and electromagnetics modeling to predict radiated fields from transmission lines, exposes inadequacies of simulation tools, and suggests methods of extending the validity of transmission line models into very high frequency ranges. Electromagnetics programs are commonly used to study radiated fields from transmission lines. However, an approach is proposed here which is also able to incorporate the components of a power system through the combined use of EMTP-type models. Carson’s formulas address the series impedance of electrical conductors above and parallel to the earth. These equations have been analyzed to show their inherent assumptions and the implications thereof. Additionally, their lack of validity at higher frequencies has been demonstrated, showing the need to replace Carson’s formulas for these types of studies. This body of work leads to several conclusions about the relatively new study of BPL. Foremost, there is a gap in modeling capabilities which has been bridged through integration of circuit-based and electromagnetics modeling, allowing more realistic prediction of BPL performance and radiated fields. The proposed approach is limited in its scope of validity due to the formulas used by EMTP-type software. To extend the range of validity, a new set of equations must be identified and implemented in the approach. Several potential methods of implementation have been explored. Though an appropriate set of equations has not yet been identified, further research in this area will benefit from a clear depiction of the next important steps and how they can be accomplished.
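For context, the kind of formula at issue can be illustrated with the closed-form "complex depth" approximation of Deri et al., widely used as a stand-in for Carson’s earth-return integral in EMTP-type tools. A minimal sketch, with illustrative parameter values, covering the external self impedance only (conductor internal impedance neglected):

    import numpy as np

    MU0 = 4e-7 * np.pi  # permeability of free space [H/m]

    def self_impedance_earth(f, h, r, rho_earth):
        """Per-unit-length self impedance [ohm/m] of a conductor at height
        h [m], radius r [m], above earth of resistivity rho_earth [ohm*m]."""
        w = 2 * np.pi * f
        p = np.sqrt(rho_earth / (1j * w * MU0))  # complex penetration depth
        return 1j * w * MU0 / (2 * np.pi) * np.log(2 * (h + p) / r)

    # e.g. self_impedance_earth(50.0, 20.0, 0.015, 100.0) at power frequency;
    # like Carson's series, this approximation degrades at BPL frequencies,
    # which is precisely the limitation the thesis addresses.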
Abstract:
Satellite measurement validations, climate models, atmospheric radiative transfer models and cloud models all depend on accurate measurements of cloud particle size distributions, number densities, spatial distributions, and other parameters relevant to cloud microphysical processes. Many airborne instruments designed to measure size distributions and concentrations of cloud particles have large uncertainties in measuring number densities and size distributions of small ice crystals. HOLODEC (Holographic Detector for Clouds) is a new instrument that avoids many of these uncertainties and makes possible measurements that other probes have never made. The advantages of HOLODEC are inherent to the holographic method. In this dissertation, I describe HOLODEC, its in-situ measurements of cloud particles, and the results of its test flights. I present a hologram reconstruction algorithm whose sample spacing does not vary with reconstruction distance. This algorithm accurately reconstructs the field at all distances inside a typical holographic measurement volume, as proven by comparison with analytical solutions to the Huygens-Fresnel diffraction integral. It is fast to compute and has diffraction-limited resolution. Further, I describe an algorithm that can find the position along the optical axis of small particles as well as large complex-shaped particles. I explain an implementation of these algorithms as an efficient, robust, automated program that allows us to process holograms on a computer cluster in a reasonable time. I show size distributions and number densities of cloud particles, and show that they are within the uncertainty of independent measurements made with another measurement method. This proves the feasibility of a cloud particle instrument that has advantages over current standard instruments. These advantages include a unique ability to detect shattered particles using three-dimensional positions, and a sample volume size that does not vary with particle size or airspeed. It is also able to yield two-dimensional particle profiles from the same measurements.
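The constant-sample-spacing property described above is characteristic of angular-spectrum propagation, in which the reconstruction grid always matches the sensor grid regardless of the distance z (unlike single-FFT Fresnel propagation, whose pixel pitch scales with z). A minimal sketch under the assumption that an angular-spectrum-type kernel is meant; this is not HOLODEC's production code, and the grid and wavelength values are up to the caller:

    import numpy as np

    def reconstruct(hologram, dx, wavelength, z):
        """Propagate a recorded hologram (2D array, pixel pitch dx) to
        distance z; output sample spacing equals dx for every z."""
        ny, nx = hologram.shape
        fx = np.fft.fftfreq(nx, d=dx)
        fy = np.fft.fftfreq(ny, d=dx)
        FX, FY = np.meshgrid(fx, fy)
        k = 2 * np.pi / wavelength
        kz_sq = k**2 - (2 * np.pi * FX)**2 - (2 * np.pi * FY)**2
        kz = np.sqrt(kz_sq.astype(complex))   # imaginary kz damps evanescent waves
        H = np.exp(1j * kz * z)               # angular-spectrum transfer function
        return np.fft.ifft2(np.fft.fft2(hologram) * H)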
Abstract:
Michigan copper mining companies owned and rented more than 3,000 houses along the Keweenaw Peninsula at the time of the 1913-14 copper strike. The provision of company-constructed housing in mining districts has drawn a wide range of inquiry. Mining historians, community planners, architectural historians, and academics interested in the immigrant experience have identified miners' housing as an intriguing example of corporate paternalism, social planning, vernacular adaptation and ethnic segregation. Michigan's Copper Country retains many examples of such housing, and recent research has shown that the Michigan copper mining companies championed the use of housing as a non-wage employment benefit. This paper investigates the increasingly important role of occupancy and control of company housing during the strike. Illustrated with images collected during the strike by the fledgling U.S. Department of Labor, the presentation explores the history of company housing in the Copper Country, its part in a larger system of corporate welfare, and how the threat of evictions may have turned the tide of the strike.
Abstract:
The distribution processes of chlorin e6 (CE) and monoaspartyl-chlorin e6 (MACE) between the outer and inner phospholipid monolayers of 1,2-dioleoyl-phosphatidylcholine (DOPC) vesicles were monitored by 1H NMR spectroscopy through analysis of chemical shifts and line widths of the DOPC vesicle resonances. Chlorin adsorption to the outer vesicle monolayer induced changes in the DOPC 1H NMR spectrum. Most pronounced was a splitting of the N-methyl choline resonance, allowing for separate analysis of the inner and outer vesicle layers. Transbilayer distribution of the chlorin compounds was indicated by time-dependent characteristic spectral changes of the DOPC resonances. Kinetic parameters for the flip-flop processes, that is, half-lives and rate constants, were obtained from the experimental data. In comparison to CE, MACE transbilayer movement was significantly reduced, with MACE remaining largely attached to the outer membrane layer. The distribution coefficients for CE and MACE between the vesicular and aqueous phases were determined; both CE and MACE exhibited a high affinity for the vesicular phase. For CE, a positive correlation was found between the transfer rate and the CE/DOPC molar ratio. Enhanced membrane rigidity, induced by incorporating increasing amounts of cholesterol into the model membrane, was accompanied by a decrease in CE flip-flop rates across the membrane. The present study shows that the movement of porphyrins across membranes can be efficiently investigated by 1H NMR spectroscopy and that small changes in porphyrin structure can have large effects on membrane kinetics.
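The kinetic treatment is not detailed in the abstract; a standard first-order description of transbilayer flip-flop, consistent with the reported half-lives and rate constants, relates the inner-leaflet fraction f_in(t) to a rate constant k:

    f_{\mathrm{in}}(t) = f_{\mathrm{in}}^{\mathrm{eq}}\left(1 - e^{-kt}\right), \qquad t_{1/2} = \frac{\ln 2}{k}

Under this reading, a slower approach of the split choline resonances to their equilibrium intensities directly reflects a smaller k, as observed for MACE relative to CE.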
Abstract:
BACKGROUND: In high-income countries, viral load is routinely measured to detect failure of antiretroviral therapy (ART) and guide switching to second-line ART. Viral load monitoring is not generally available in resource-limited settings. We examined switching from nonnucleoside reverse transcriptase inhibitor (NNRTI)-based first-line regimens to protease inhibitor-based regimens in Africa, South America and Asia. DESIGN AND METHODS: Multicohort study of 17 ART programmes. All sites monitored CD4 cell counts and had access to second-line ART; 10 sites also monitored viral load. We compared times to switching and CD4 cell counts at switching, and obtained adjusted hazard ratios (aHRs) for switching with 95% confidence intervals (CIs) from random-effects Weibull models. RESULTS: A total of 20 113 patients, including 6369 (31.7%) from the 10 programmes with access to viral load monitoring, were analysed; 576 patients (2.9%) switched. Low CD4 cell counts at ART initiation were associated with switching in all programmes. Median time to switching was 16.3 months [interquartile range (IQR) 10.1-26.6] in programmes with viral load monitoring and 21.8 months (IQR 14.0-21.8) in programmes without (P < 0.001). Median CD4 cell counts at switching were 161 cells/microl (IQR 77-265) in programmes with viral load monitoring and 102 cells/microl (44-181) in programmes without (P < 0.001). Switching was more common in programmes with viral load monitoring during months 7-18 after starting ART (aHR 1.38; 95% CI 0.97-1.98), similar during months 19-30 (aHR 0.97; 95% CI 0.58-1.60) and less common during months 31-42 (aHR 0.29; 95% CI 0.11-0.79). CONCLUSION: In resource-limited settings, switching to second-line regimens tends to occur earlier and at higher CD4 cell counts in ART programmes with viral load monitoring than in programmes without.
Abstract:
Quantitative reverse transcriptase real-time PCR (QRT-PCR) is a robust method to quantitate RNA abundance. The procedure is highly sensitive and reproducible as long as the initial RNA is intact. However, breaks in the RNA due to chemical or enzymatic cleavage reduce the number of RNA molecules that contain intact amplicons; as a consequence, the number of molecules available for amplification decreases. We determined the relation between RNA fragmentation and threshold cycle (Ct) values in subsequent QRT-PCR for four genes in an experimental model of intact and partially hydrolyzed RNA derived from a cell line, and we describe the relation between RNA integrity, amplicon size and Ct values in this biologically homogeneous system. We demonstrate that degradation-related shifts of Ct values can be compensated by calculating delta Ct values between test genes and the mean values of several control genes. These delta Ct values are less sensitive to fragmentation of the RNA and are unaffected by varying amounts of input RNA. The feasibility of the procedure was demonstrated by comparing Ct values from a larger panel of genes in intact and in partially degraded RNA. We compared Ct values from intact RNA derived from well-preserved tumor material and from fragmented RNA derived from formalin-fixed, paraffin-embedded (FFPE) samples of the same tumors. We demonstrate that the relative abundance of gene expression can be determined from FFPE material even when the amount of RNA in the sample and the extent of fragmentation are not known.
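A minimal sketch of the described normalization; the conversion to relative abundance assumes the usual near-perfect (two-fold per cycle) amplification efficiency, and the choice of control genes is up to the study design:

    import numpy as np

    def delta_ct(ct_test, ct_controls):
        # delta Ct of a test gene against the mean of several control genes;
        # degradation shifts all Ct values similarly, so the difference is
        # largely insensitive to RNA fragmentation and input amount.
        return ct_test - np.mean(ct_controls)

    def relative_abundance(d_ct):
        # Lower delta Ct means higher expression.
        return 2.0 ** (-d_ct)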
Abstract:
A viscosity model calibrated on process data is proposed and applied to predict the viscosity of a polyamide 12 (PA12) polymer melt as a function of time, temperature and shear rate. In a first step, the viscosity model was derived from experimental data. It is based mainly on the three-parameter Carreau approach, extended by two additional shift factors. The temperature dependence of the viscosity is accounted for by the Arrhenius shift factor aT. A further shift factor aSC (Structural Change) is introduced, describing the structural change of PA12 caused by the process conditions during laser sintering. This structural change was observed as a significant increase in viscosity. It was concluded that the viscosity increase results from molar mass build-up and can be understood as post-condensation. Depending on the time and temperature conditions, the viscosity was found to approach an irreversible limit exponentially as a consequence of the molar mass build-up. The rate of this post-condensation is time- and temperature-dependent. It is assumed that the powder bed temperature causes molar mass build-up and hence chain extension. This progressive increase in chain length reduces molecular mobility and suppresses further post-condensation. The shift factor aSC expresses this physico-chemical picture and contains two additional parameters: aSC,UL corresponds to the upper viscosity limit, whereas k0 denotes the rate of structural change. It was further found useful to distinguish between a flow activation energy and a structural-change activation energy in the calculation of aT and aSC. The model parameters were optimized using a genetic algorithm. Good agreement was found between calculated and measured viscosities, so the viscosity model is able to predict the viscosity of a PA12 polymer melt under the combined time and temperature influence of laser sintering. In a second step, the model was applied to calculate the viscosity during the laser sintering process as a function of energy density, using process data such as melt temperature and exposure time measured on-line with a high-speed thermography camera. Finally, the influence of the structural change on the viscosity level in the process was demonstrated.
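A minimal sketch of the model structure described above (a Carreau curve shifted by aT and aSC). The exact functional forms are assumptions: the abstract specifies aSC only qualitatively, as an exponential approach to the upper limit aSC,UL at a temperature-dependent rate k0, and all parameter values below are placeholders to be fitted, e.g. by the genetic algorithm mentioned:

    import numpy as np

    R = 8.314  # universal gas constant [J/(mol*K)]

    def a_T(T, T_ref, E_flow):
        # Arrhenius shift with the flow activation energy E_flow.
        return np.exp(E_flow / R * (1.0 / T - 1.0 / T_ref))

    def a_SC(t, T, T_ref, aSC_UL, k0_ref, E_sc):
        # Structural-change shift: exponential approach from 1 to aSC_UL at a
        # rate k0 governed by the structural-change activation energy E_sc.
        k0 = k0_ref * np.exp(-E_sc / R * (1.0 / T - 1.0 / T_ref))
        return aSC_UL - (aSC_UL - 1.0) * np.exp(-k0 * t)

    def viscosity(gamma_dot, t, T, A, B, C, T_ref, E_flow, aSC_UL, k0_ref, E_sc):
        # Three-parameter Carreau curve, eta = A / (1 + B*gamma_dot)^C,
        # shifted along both axes by the combined factor a = aT * aSC.
        a = a_T(T, T_ref, E_flow) * a_SC(t, T, T_ref, aSC_UL, k0_ref, E_sc)
        return a * A / (1.0 + a * B * gamma_dot) ** C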