971 results for Dynamic mass transport
Abstract:
Cancer is one of the most frequent causes of death in Europe. A molecular understanding of the mechanisms contributing to disease development is the basis for any long-term improvement in treatment success. In this context, proteases not only play an important role but are already established target structures of current treatment strategies in many diseases. The protease threonine aspartase 1 (Taspase1) plays a crucial role in the activation of Mixed Lineage Leukemia (MLL) fusion proteins and thus in the development of aggressive leukemias. Recent work also underlines the oncological relevance of Taspase1 for solid tumors. However, knowledge of the molecular mechanisms and signaling networks responsible for the (patho)biological functions of Taspase1 is still fragmentary. To close these gaps, this work aimed to develop and evaluate new strategies for the inhibition of Taspase1, and additionally to gain new insights into evolutionary functional mechanisms and the further fine regulation of Taspase1. First, the establishment and application of a cell-based Taspase1 assay system allowed chemical compounds to be tested for their inhibitory activity. Surprisingly, such cellular analyses in combination with in silico modeling clearly demonstrated that an inhibitor postulated in the literature showed no specific activity against Taspase1 in living tumor cells. As a possible alternative, approaches for genetic inhibition were also evaluated. Although published studies describe Taspase1 as an ααββ heterodimer, overexpression of catalytically inactive mutants produced no trans-dominant negative effect and thus no inhibition of the wild-type enzyme.
Further cell-biological and biochemical analyses demonstrated for the first time that Taspase1 in living cells indeed exists predominantly as a monomer and not as a dimer. The identification of evolutionarily conserved or divergent functional mechanisms has previously provided important clues for the inhibition of a variety of cancer-relevant proteins. Since the existence and functional conservation of a Taspase1 homolog had been postulated in Drosophila melanogaster, a further part of this work investigated the evolutionary development of Drosophila Taspase1 (dTaspase1). Although Taspase1 is considered an evolutionarily highly conserved protease, important differences between the two orthologs were identified. Besides a conserved autocatalytic activation mechanism, dTaspase1 possesses a more flexible substrate recognition sequence compared with the human enzyme, which enlarges the Drosophila-specific degradome. These results further show that not only proteomic but also cell-biological and bioinformatic analyses are suitable and necessary for defining and predicting the degradome. Interestingly, the differential regulation of dTaspase1 activity can additionally be attributed to an altered intracellular localization. The absence of the active nuclear import and nucleolar localization signals that are highly conserved in vertebrates explains why dTaspase1 processes nuclear substrates less efficiently. The regulation of localization and activity via an importin-α/NPM1 axis described for human Taspase1 therefore appears to have arisen only during vertebrate evolution. Thus, a previously unknown evolutionary principle was identified by which a protease uses a transport- and localization-based mechanism for the fine regulation of its activity "from fly to man".
Post-translational modifications (PTMs) of the protein sequence, including phosphorylation and acetylation, offer a further means of dynamic functional modulation. Interestingly, acetylation of human Taspase1 by several histone acetyltransferases (HATs) was demonstrated using independent methods, including mass-spectrometric analyses. This modification is reversible, with the histone deacetylase HDAC1 in particular catalyzing the deacetylation of the protease through interaction with Taspase1. Whereas Taspase1 is acetylated in its active conformation, deacetylation reduces its enzymatic activity. The modulation of Taspase1 activity therefore appears to be controlled not only by intra-proteolytic autoactivation and transport and interaction mechanisms, but also by post-translational modifications. In summary, this work provided decisive new insights into the (patho)biological function and fine regulation of Taspase1. These results represent an important step not only toward an improved understanding of "Taspase1 biology" but also toward the successful inhibition and assessment of the cancer-relevant function of this protease.
Abstract:
Software is available, which simulates all basic electrophoretic systems, including moving boundary electrophoresis, zone electrophoresis, ITP, IEF and EKC, and their combinations under almost exactly the same conditions used in the laboratory. These dynamic models are based upon equations derived from the transport concepts such as electromigration, diffusion, electroosmosis and imposed hydrodynamic buffer flow that are applied to user-specified initial distributions of analytes and electrolytes. They are able to predict the evolution of electrolyte systems together with associated properties such as pH and conductivity profiles and are as such the most versatile tool to explore the fundamentals of electrokinetic separations and analyses. In addition to revealing the detailed mechanisms of fundamental phenomena that occur in electrophoretic separations, dynamic simulations are useful for educational purposes. This review includes a list of current high-resolution simulators, information on how a simulation is performed, simulation examples for zone electrophoresis, ITP, IEF and EKC and a comprehensive discussion of the applications and achievements.
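In the simplest single-analyte case, the transport concepts named above (electromigration, diffusion) reduce to a 1-D advection-diffusion equation. The sketch below is a minimal explicit finite-difference illustration of that numerical idea, under assumed parameter values; real simulators couple many ionic species through electroneutrality and a spatially varying field.

```python
import numpy as np

def simulate_zone(c0, mobility, diff, E, dx, dt, steps):
    """Advance a 1-D concentration profile under a constant field E
    using upwind advection plus central diffusion (periodic ends).
    A toy single-analyte model, not a full multi-component simulator."""
    c = c0.copy()
    v = mobility * E                                   # electromigration velocity, m/s
    for _ in range(steps):
        adv = -v * (c - np.roll(c, 1)) / dx            # upwind difference (v > 0)
        dif = diff * (np.roll(c, 1) - 2 * c + np.roll(c, -1)) / dx**2
        c += dt * (adv + dif)
    return c

# a Gaussian sample zone migrating along a 1 cm channel (illustrative values)
x = np.linspace(0, 1e-2, 400)
c0 = np.exp(-((x - 2e-3) ** 2) / (2 * (2e-4) ** 2))
c1 = simulate_zone(c0, mobility=3e-8, diff=1e-9, E=1e4,
                   dx=x[1] - x[0], dt=1e-4, steps=2000)
# the zone drifts toward larger x while total amount is conserved
```

Both stencils are conservative, so the integrated amount of analyte stays constant while the zone migrates and broadens.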
Abstract:
Loading is important to maintain the balance of matrix turnover in the intervertebral disc (IVD). Daily diurnal cyclic loading assists in the transport of large soluble factors across the IVD and its surrounding circulation and applies direct and indirect stimuli to disc cells. Acute mechanical injury and accumulated overloading, however, can induce disc degeneration. Recently, more information has become available on how cyclic loading, especially axial compression and hydrostatic pressure, affects IVD cell biology. This review summarises recent studies on the response of the IVD and stem cells to applied cyclic compression and hydrostatic pressure. These studies investigate the possible role of loading in the initiation and progression of disc degeneration, as well as quantifying a physiological loading condition for the study of biological therapies for disc degeneration. Subsequently, a possible physiological/beneficial loading range is proposed. This physiological/beneficial loading could provide insight into how to design loading regimes in a specific system for the testing of various biological therapies such as cell therapy, chemical therapy or tissue-engineering constructs to achieve a better final outcome. In addition, the parameter space of 'physiological' loading may also be an important factor for the differentiation of stem cells towards ideally 'discogenic' cells for tissue-engineering purposes.
Abstract:
Monte Carlo (MC) based dose calculations can compute dose distributions with an accuracy surpassing that of conventional algorithms used in radiotherapy, especially in regions of tissue inhomogeneities and surface discontinuities. The Swiss Monte Carlo Plan (SMCP) is a GUI-based framework for photon MC treatment planning (MCTP) interfaced to the Eclipse treatment planning system (TPS). As with any dose calculation algorithm, the MCTP needs to be commissioned and validated before the algorithm is used for clinical cases. The aim of this study is the investigation of a 6 MV beam for clinical situations within the framework of the SMCP. In this respect, all parts, i.e. open fields and all the clinically available beam modifiers, have to be configured so that the calculated dose distributions match the corresponding measurements. Dose distributions for the 6 MV beam were simulated in a water phantom using a phase space source above the beam modifiers. The VMC++ code was used for the radiation transport through the beam modifiers (jaws, wedges, block and multileaf collimator (MLC)) as well as for the calculation of the dose distributions within the phantom. The voxel size of the dose distributions was 2 mm in all directions. The statistical uncertainty of the calculated dose distributions was below 0.4%. Simulated depth dose curves and dose profiles in terms of Gy/MU for static and dynamic fields were compared with the corresponding measurements using dose difference and γ analysis. For a dose difference criterion of ±1% of Dmax and a distance-to-agreement criterion of ±1 mm, the γ analysis showed excellent agreement between measurements and simulations for all static open and MLC fields. Tuning the density and thickness of all hard wedges led to agreement with the corresponding measurements within 1% or 1 mm. Similar results were achieved for the block.
For the validation of the tuned hard wedges, very good agreement between calculated and measured dose distributions was achieved using a 1%/1 mm criterion for the γ analysis. The calculated dose distributions of the enhanced dynamic wedges (10°, 15°, 20°, 25°, 30°, 45° and 60°) met the 1%/1 mm criterion when compared with the measurements for all situations considered. For the IMRT fields, all compared measured dose values agreed with the calculated dose values within a 2% dose difference or within a 1 mm distance. The SMCP has been successfully validated for static and dynamic 6 MV photon beams, resulting in accurate dose calculations suitable for applications in clinical cases.
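The γ analysis referred to above combines a dose-difference and a distance-to-agreement criterion into a single index, with γ ≤ 1 counted as a pass. Below is a simplified 1-D, globally normalised sketch of the computation, not a clinical implementation; the test profiles are synthetic.

```python
import numpy as np

def gamma_1d(x, dose_ref, dose_eval, dd=0.01, dta=1.0):
    """1-D gamma index: dd is the dose criterion as a fraction of the
    reference maximum (global normalisation), dta is the
    distance-to-agreement in the same units as x."""
    dmax = dose_ref.max()
    gam = np.empty_like(dose_ref)
    for i, (xi, di) in enumerate(zip(x, dose_ref)):
        dist2 = ((x - xi) / dta) ** 2                  # spatial term
        dose2 = ((dose_eval - di) / (dd * dmax)) ** 2  # dose term
        gam[i] = np.sqrt(np.min(dist2 + dose2))        # minimum over all points
    return gam

x = np.linspace(0, 100, 201)                # position, mm
ref = np.exp(-((x - 50) / 20) ** 2)         # reference profile
ev = np.exp(-((x - 50.4) / 20) ** 2)        # evaluated profile, 0.4 mm shift
g = gamma_1d(x, ref, ev, dd=0.01, dta=1.0)
passing = (g <= 1).mean()                   # fraction passing 1%/1 mm
```

A 0.4 mm shift sits well inside the 1%/1 mm tolerance, so every point passes in this synthetic case.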
Abstract:
One of the challenges for structural engineers during design is considering how the structure will respond to crowd-induced dynamic loading. It has been shown that human occupants of a structure do not simply add mass to the system when considering the overall dynamic response of the system, but interact with it and may induce changes of the dynamic properties from those of the empty structure. This study presents an investigation into the human-structure interaction based on several crowd characteristics and their effect on the dynamic properties of an empty structure. The dynamic properties, including frequency, damping, and mode shapes, were estimated for a single test structure by means of experimental modal analysis techniques. The same techniques were utilized to estimate the dynamic properties when the test structure was occupied by a crowd with different combinations of size, posture, and distribution. The goal of this study is to isolate the occupant characteristics in order to determine the significance of each to be considered when designing new structures to avoid crowd serviceability issues. The results are presented and summarized based on the level of influence of each characteristic. The posture that produces the most significant effects based on the scope of this research is standing with bent knees, with a maximum decrease in frequency of the first mode of the empty structure of 32 percent at the highest mass ratio. The associated damping also increased to 36 times the damping of the empty structure. In addition to the analysis of the experimental data, finite element models and a two degree-of-freedom model were created. These models were used to gain an understanding of the test structure, model a crowd as an equivalent mass, and also to develop a single degree-of-freedom (SDOF) model to best represent a crowd of occupants based on the experimental results.
The SDOF models created had an average frequency of 5.0 Hz, within the range presented in existing biomechanics research, and combined SDOF systems of the test structure and crowd were able to reproduce the frequency and damping ratios associated with experimental tests. Results of this study confirmed the existence of human-structure interaction and the inability to simply model a crowd as only additional mass. The two degree-of-freedom model determined was able to predict the change in natural frequency and damping ratio for a structure occupied by multiple group sizes in a single posture. These results and model are the preliminary steps in the development of an appropriate method for modeling a crowd in combination with a more complex FE model of the empty structure.
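The combined structure-plus-crowd idea above can be illustrated with a textbook two degree-of-freedom eigenvalue calculation: attaching a crowd SDOF to a structure SDOF splits the response into two coupled modes, the lower of which falls below the empty-structure frequency. The masses and frequencies below are illustrative assumptions, not the thesis's identified parameters.

```python
import numpy as np

def coupled_frequencies(ms, fs, mc, fc):
    """Undamped natural frequencies (Hz) of a structure SDOF
    (mass ms, frequency fs) carrying a crowd SDOF (mass mc,
    frequency fc). Returns the two coupled-mode frequencies sorted."""
    ks = ms * (2 * np.pi * fs) ** 2      # structure stiffness
    kc = mc * (2 * np.pi * fc) ** 2      # crowd "leg" stiffness
    M = np.array([[ms, 0.0], [0.0, mc]])
    K = np.array([[ks + kc, -kc], [-kc, kc]])
    w2 = np.sort(np.linalg.eigvals(np.linalg.solve(M, K)).real)
    return np.sqrt(w2) / (2 * np.pi)

f_empty = 6.0                             # Hz, assumed empty-structure frequency
f1, f2 = coupled_frequencies(ms=1000.0, fs=f_empty, mc=400.0, fc=5.0)
# the coupled modes bracket the two uncoupled frequencies: f1 < 5 Hz < 6 Hz < f2
```

This bracketing is the classic signature of human-structure interaction that a pure added-mass model cannot reproduce.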
Abstract:
As lightweight and slender structural elements are more frequently used in design, large-scale structures become more flexible and susceptible to excessive vibrations. To ensure the functionality of the structure, the dynamic properties of the occupied structure need to be estimated during the design phase. Traditional analysis methods model occupants simply as additional mass; however, research has shown that human occupants could be better modeled as an additional degree-of-freedom. In the United Kingdom, active and passive crowd models have been proposed by the Joint Working Group (JWG) as a result of a series of analytical and experimental research. It is expected that the crowd models would yield a more accurate estimation of the dynamic response of the occupied structure. However, experimental testing recently conducted through a graduate student project at Bucknell University indicated that the proposed passive crowd model might be inaccurate in representing the impact on the structure from the occupants. The objective of this study is to provide an assessment of the validity of the crowd models proposed by the JWG through comparing the dynamic properties obtained from experimental testing data and analytical modeling results. The experimental data used in this study was collected by Firman in 2010. The analytical results were obtained by performing a time-history analysis on a finite element model of the occupied structure. The crowd models were created based on the recommendations from the JWG combined with the physical properties of the occupants during the experimental study. During this study, SAP2000 was used to create the finite element models and to implement the analysis; Matlab and ME'scope were used to obtain the dynamic properties of the structure through processing the time-history analysis results from SAP2000.
The result of this study indicates that the active crowd model could quite accurately represent the impact on the structure from occupants standing with bent knees while the passive crowd model could not properly simulate the dynamic response of the structure when occupants were standing straight or sitting on the structure. Future work related to this study involves improving the passive crowd model and evaluating the crowd models with full-scale structure models and operating data.
Abstract:
Bidirectional ITP in fused-silica capillaries double-coated with Polybrene and poly(vinylsulfonate) is a robust approach for analysis of low-molecular-mass compounds. EOF towards the cathode is strong (mobility >4.0 × 10⁻⁸ m²/Vs) within the entire pH range investigated (2.40-8.08), dependent on ionic strength and buffer used and, at constant ionic strength, higher at alkaline pH. Electrokinetic separations and transport in such coated capillaries can be described with a dynamic computer model which permits the combined simulation of electrophoresis and electroosmosis, in which the EOF is predicted either with a constant (i.e. pH- and ionic strength-independent) or a pH- and ionic strength-dependent electroosmotic mobility. Detector profiles predicted by computer simulation agree qualitatively well with bidirectional isotachopherograms that are monitored with a setup comprising two axial contactless conductivity detectors and a UV absorbance detector. The varying EOF predicted with a pH- and ionic strength-dependent electroosmotic mobility can be regarded as being realistic.
Abstract:
The use of a conventional orifice-plate meter is typically restricted to measurements of steady flows. This study proposes a new and effective computational-experimental approach for measuring the time-varying (but steady-in-the-mean) nature of turbulent pulsatile gas flows. Low Mach number (effectively constant density) steady-in-the-mean gas flows with large-amplitude fluctuations (whose highest significant frequency is characterized by the value f_F) are termed pulsatile if the fluctuations have a direct correlation with the time-varying signature of the imposed dynamic pressure difference and, furthermore, have fluctuation amplitudes significantly larger than those associated with turbulence or random acoustic wave signatures. The experimental aspect of the proposed calibration approach is based on the use of Coriolis meters (whose oscillating-arm frequency f_Coriolis >> f_F), which are capable of effectively measuring the mean flow rate of the pulsatile flows. Together with the experimental measurements of the mean mass flow rate of these pulsatile flows, the computational approach presented here is shown to be effective in converting the dynamic pressure difference signal into the desired dynamic flow rate signal. The proposed approach is reliable because the time-varying flow rate predictions obtained for two different orifice-plate meters exhibit approximately the same qualitative, dominant features of the pulsatile flow.
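As a rough sketch of the calibration idea, a quasi-steady orifice relation can map a dynamic pressure-difference signal to a flow-rate signal whose mean is then pinned to the Coriolis-meter reading. This is only an algebraic stand-in for the paper's computational conversion; the geometry, discharge coefficient, and signal values below are assumed.

```python
import numpy as np

def orifice_flow_series(dp, rho, d, beta, cd=0.61):
    """Quasi-steady orifice-plate relation applied sample-by-sample:
    dp in Pa, rho in kg/m^3, orifice diameter d in m, beta the
    diameter ratio. Returns mass flow rate in kg/s. Illustrative only."""
    area = np.pi * d ** 2 / 4
    return cd * area * np.sqrt(2 * rho * np.clip(dp, 0, None)) / np.sqrt(1 - beta ** 4)

def mean_corrected(q_dyn, q_coriolis_mean):
    """Rescale the dynamic estimate so its mean matches the
    Coriolis-meter mean flow rate."""
    return q_dyn * (q_coriolis_mean / q_dyn.mean())

t = np.linspace(0, 1, 1000)
dp = 2000 + 800 * np.sin(2 * np.pi * 10 * t)      # assumed pulsatile dP, Pa
q = orifice_flow_series(dp, rho=1.2, d=0.05, beta=0.5)
q_cal = mean_corrected(q, q_coriolis_mean=0.05)   # assumed Coriolis mean, kg/s
```

The mean correction preserves the fluctuation shape of the converted signal while anchoring its average to the trusted mean-flow measurement.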
Abstract:
This thesis covers the correction, verification, development, and implementation of a computational fluid dynamics (CFD) model for an orifice-plate meter. Past results were corrected and further expanded on, with the compressibility effects of acoustic waves taken into account. One dynamic pressure-difference transducer measures the time-varying differential pressure across the orifice meter. A dynamic absolute-pressure measurement is also taken at the inlet of the orifice meter, along with a suitable temperature measurement of the mean flow gas. Together these three measurements allow an incompressible CFD simulation (using a well-tested and robust model) of the cross-section-independent time-varying mass flow rate through the orifice meter. The mean value of this incompressible mass flow rate is then corrected to match the mean of the measured flow rate (obtained from a Coriolis meter located upstream of the orifice meter). Even with the mean and compressibility corrections, significant differences in the measured mass flow rates at two orifice meters in a common flow stream were observed. This means that the compressibility effects associated with pulsatile gas flows are significant in the measurement of the time-varying mass flow rate. Future work (with the approach and initial runs covered here) will provide an indirect verification of the reported mass flow rate measurements.
Abstract:
In this report, we attempt to define the capabilities of the infrared satellite remote sensor Multifunctional Transport Satellite-2 (MTSAT-2) (i.e. a geosynchronous instrument) in characterizing volcanic eruptive behavior in the highly active region of Indonesia. Sulfur dioxide data from NASA's Ozone Monitoring Instrument (OMI) (i.e. a polar orbiting instrument) are presented here for validation of the processes interpreted using the thermal infrared datasets. Data from two case studies are analyzed specifically for eruptive products producing large thermal anomalies (i.e. lava flows, lava domes, etc.), volcanic ash and SO2 clouds: three distinctly characteristic and abundant volcanic emissions. Two primary methods for the detection of heat signatures are used and compared in this report: single-channel thermal radiance (4 µm) and the normalized thermal index (NTI) algorithm. For automated purposes, fixed thresholds must be determined for these methods. A base minimum detection limit (MDL) of 2.30×10⁵ W m⁻² sr⁻¹ m⁻¹ for single-channel thermal radiance and of -0.925 for NTI generate false alarm rates of 35.78% and 34.16%, respectively. A spatial comparison method, developed here specifically for use in Indonesia and used as a second parameter for detection, is implemented to address the high false alarm rate. For the single-channel thermal radiance method, the utilization of the spatial comparison method eliminated 100% of the false alarms while maintaining every true anomaly. The NTI algorithm showed similar results, with only 2 false alarms remaining. No definitive difference is observed between the two thermal detection methods for automated use; however, the single-channel thermal radiance method coupled with the SO2 mass abundance data can be used to interpret volcanic processes, including the identification of lava dome activity at Sinabung as well as the mechanism of the dome emplacement (i.e. endogenous or exogenous).
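The NTI is commonly computed as a normalised difference between a ~4-µm and a longer-wavelength (~11-12-µm) radiance, and the report's automated detection combines fixed thresholds on NTI and on the 4-µm radiance itself. The sketch below assumes that band pairing and uses synthetic radiance values; only the two threshold numbers come from the text above.

```python
import numpy as np

def nti(rad4, rad_tir):
    """Normalized thermal index from a 4-um radiance and a
    thermal-infrared (~11-12 um) radiance (assumed band pairing)."""
    return (rad4 - rad_tir) / (rad4 + rad_tir)

def detect_anomalies(rad4, rad_tir, nti_thresh=-0.925, rad_thresh=2.30e5):
    """Flag pixels exceeding either fixed threshold from the report."""
    return (nti(rad4, rad_tir) > nti_thresh) | (rad4 > rad_thresh)

# synthetic two-pixel scene: cool background vs. hot volcanic pixel
rad4 = np.array([3.0e4, 4.0e5])       # W m^-2 sr^-1 m^-1, assumed values
rad_tir = np.array([8.0e5, 9.0e5])
flags = detect_anomalies(rad4, rad_tir)   # -> [False, True]
```

A hot source raises the 4-µm radiance far more than the thermal-infrared radiance, pushing NTI up toward zero, which is what both thresholds key on.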
Only one technique, the brightness temperature difference (BTD) method, is used for the detection of ash. Trends of ash area, water/ice area, and their respective concentrations yield interpretations of increased ice formation, aggregation, and sedimentation processes that only a high-temporal-resolution instrument like the MTSAT-2 can analyze. A conceptual model of a secondary zone of aggregation occurring in the migrating Kelut ash cloud, which decreases the distal fine-ash component and the hazards to flight paths, is presented in this report. Unfortunately, the SO2 data were unable to definitively reinforce the concept of a secondary zone of aggregation due to the lack of sufficient temporal resolution. However, a detailed study of the Kelut SO2 cloud is used to determine that there were no climatic impacts generated from this eruption, given the atmospheric residence time and e-folding time of ~14 days for the SO2. This report applies the complementary assets offered by utilizing a high-temporal and a high-spatial resolution satellite, and it demonstrates that these two instruments can provide unparalleled observations of dynamic volcanic processes.
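The BTD method exploits the fact that silicate ash tends to produce a negative 11-µm minus 12-µm brightness-temperature difference, whereas meteorological (water/ice) cloud tends to produce a positive one. A minimal sketch with an assumed threshold value:

```python
import numpy as np

def btd_ash_mask(bt11, bt12, thresh=-0.5):
    """Flag ash pixels where the brightness-temperature difference
    BT(11 um) - BT(12 um) is below a (negative) threshold.
    The -0.5 K threshold is an assumed illustrative value."""
    return (bt11 - bt12) < thresh

# synthetic pixels: meteorological cloud (positive BTD) vs. ash (negative BTD)
bt11 = np.array([270.0, 255.0])    # K, assumed values
bt12 = np.array([268.0, 257.0])
mask = btd_ash_mask(bt11, bt12)    # -> [False, True]
```

The sign reversal arises because silicate particles absorb more strongly at 11 µm than at 12 µm, the opposite of water and ice.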
Abstract:
Over the past several decades, it has become apparent that anthropogenic activities have resulted in the large-scale enhancement of the levels of many trace gases throughout the troposphere. More recently, attention has been given to the transport pathway taken by these emissions as they are dispersed throughout the atmosphere. The transport pathway determines the physical characteristics of emissions plumes and therefore plays an important role in the chemical transformations that can occur downwind of source regions. For example, the production of ozone (O3) is strongly dependent upon the transport its precursors undergo. O3 can initially be formed within air masses while still over polluted source regions. These polluted air masses can experience continued O3 production or O3 destruction downwind, depending on the air mass's chemical and transport characteristics. At present, however, there are a number of uncertainties in the relationships between transport and O3 production in the North Atlantic lower free troposphere. The first phase of the study presented here used measurements made at the Pico Mountain observatory and model simulations to determine transport pathways for US emissions to the observatory. The Pico Mountain observatory was established in the summer of 2001 in order to address the need to understand the relationships between transport and O3 production. Measurements from the observatory were analyzed in conjunction with simulations from the Lagrangian particle dispersion model (LPDM) FLEXPART in order to determine the transport pathway for events observed at the Pico Mountain observatory during July 2003. A total of 16 events were observed, 4 of which were analyzed in detail. The transport time for these 16 events varied from 4.5 to 7 days, while the transport altitudes over the ocean ranged from 2-8 km, but were typically less than 3 km.
In three of the case studies, eastward advection and transport in a weak warm conveyor belt (WCB) airflow was responsible for the export of North American emissions into the FT, while transport in the FT was governed by easterly winds driven by the Azores/Bermuda High (ABH) and transient northerly lows. In the fourth case study, North American emissions were lofted to 6-8 km in a WCB before being entrained in the same cyclone's dry airstream and transported down to the observatory. The results of this study show that the lower marine FT may provide an important transport environment where O3 production may continue, in contrast to transport in the marine boundary layer, where O3 destruction is believed to dominate. The second phase of the study presented here focused on improving the analysis methods that are available with LPDMs. While LPDMs are popular and useful for the analysis of atmospheric trace gas measurements, identifying the transport pathway of emissions from their source to a receptor (the Pico Mountain observatory in our case) using the standard gridded model output can be difficult or impossible, particularly during complex meteorological scenarios. The transport study in phase 1 was limited to only 1 month out of more than 3 years of available data and included only 4 case studies out of the 16 events specifically due to this confounding factor. The second phase of this study addressed this difficulty by presenting a method to clearly and easily identify the pathway taken by only those emissions that arrive at a receptor at a particular time, by combining the standard gridded output from forward (i.e., concentrations) and backward (i.e., residence time) LPDM simulations, greatly simplifying similar analyses.
The ability of the method to successfully determine the source-to-receptor pathway, restoring this Lagrangian information that is lost when the data are gridded, is proven by comparing the pathway determined from this method with the particle trajectories from both the forward and backward models. A sample analysis is also presented, demonstrating that this method is more accurate and easier to use than existing methods using standard LPDM products. Finally, we discuss potential future work that would be possible by combining the backward LPDM simulation with gridded data from other sources (e.g., chemical transport models) to obtain a Lagrangian sampling of the air that will eventually arrive at a receptor.
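The combination idea described above can be sketched as an element-wise product of the forward-run concentration grid and the backward-run residence-time grid: a cell scores highly only if emissions actually passed through it *and* air from it later reached the receptor. The normalisation below is a schematic assumption, not the paper's exact formulation, and the grids are toy values.

```python
import numpy as np

def source_to_receptor_pathway(conc_fwd, restime_bwd):
    """Element-wise product of a forward-simulation concentration grid
    and a backward-simulation residence-time grid, normalised to sum
    to one. High values mark grid cells on the source-to-receptor
    pathway. Schematic sketch of the combination idea."""
    path = conc_fwd * restime_bwd
    total = path.sum()
    return path / total if total > 0 else path

# toy 1-D grids: plume spreading forward from the source (left),
# receptor sensitivity extending backward from the receptor (right)
conc = np.array([0.0, 1.0, 2.0, 1.0, 0.2, 0.0])
rest = np.array([0.0, 0.1, 1.0, 2.0, 1.0, 0.0])
p = source_to_receptor_pathway(conc, rest)
# the pathway peaks where both fields overlap, between source and receptor
```

Either field alone would spread over regions irrelevant to the receptor; the product restores the Lagrangian source-to-receptor connection lost in gridding.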
Abstract:
Lipoproteins are a heterogeneous population of blood plasma particles composed of apolipoproteins and lipids. Lipoproteins transport exogenous and endogenous triglycerides and cholesterol from sites of absorption and formation to sites of storage and usage. Three major classes of lipoproteins are distinguished according to their density: high-density (HDL), low-density (LDL) and very low-density lipoproteins (VLDL). While HDLs contain mainly apolipoproteins of lower molecular weight, the two other classes contain apolipoprotein B and apolipoprotein (a) together with triglycerides and cholesterol. HDL concentrations were found to be inversely related to coronary heart disease and LDL/VLDL concentrations directly related. Although many studies have been published in this area, few have concentrated on the exact protein composition of lipoprotein particles. Lipoproteins were separated by density gradient ultracentrifugation into different subclasses. Native gel electrophoresis revealed different gel migration behaviour of the particles, with less dense particles having higher apparent hydrodynamic radii than denser particles. Apolipoprotein composition profiles were measured by matrix-assisted laser desorption/ionization-mass spectrometry on a macromizer instrument, equipped with the recently introduced cryodetector technology, and revealed differences in apolipoprotein composition between HDL subclasses. By combining these profiles with protein identifications from native and denaturing polyacrylamide gels by liquid chromatography-tandem mass spectrometry, we characterized comprehensively the exact protein composition of different lipoprotein particles. 
We concluded that the differential display of protein weight information acquired by macromizer mass spectrometry is an excellent tool for revealing structural variations of different lipoprotein particles, and hence the foundation is laid for the screening of cardiovascular disease risk factors associated with lipoproteins.
Abstract:
Dynamic models for electrophoresis are based upon model equations derived from the transport concepts in solution together with user-inputted conditions. They are able to predict theoretically the movement of ions and are as such the most versatile tool to explore the fundamentals of electrokinetic separations. Since its inception three decades ago, the state of dynamic computer simulation software and its use has progressed significantly and Electrophoresis played a pivotal role in that endeavor as a large proportion of the fundamental and application papers were published in this periodical. Software is available that simulates all basic electrophoretic systems, including moving boundary electrophoresis, zone electrophoresis, ITP, IEF and EKC, and their combinations under almost exactly the same conditions used in the laboratory. This has been employed to show the detailed mechanisms of many of the fundamental phenomena that occur in electrophoretic separations. Dynamic electrophoretic simulations are relevant for separations on any scale and instrumental format, including free-fluid preparative, gel, capillary and chip electrophoresis. This review includes a historical overview, a survey of current simulators, simulation examples and a discussion of the applications and achievements of dynamic simulation.
Abstract:
Continuous conveyors with a dynamic merge were developed with adaptable control equipment to differentiate these merges from competing stop-and-go merges. With a dynamic merge, the partial flows are manipulated by influencing speeds so that transport units need not stop at the merge. This leads to a more uniform flow of materials, which is qualitatively observable and verifiable in long-term measurements. But although this type of merge is visually mesmerizing, does it lead to advantages from the viewpoint of material flow technology? Our study with real data indicates that a dynamic merge shows a 24% increase in performance, but only for symmetric or nearly symmetric flows. This performance advantage decreases as the flows become less symmetric, approaching the throughput of traditional stop-and-go merges. And with a cost premium for a continuous merge of approximately 10% due to the additional technical components (belt conveyor, adjustable drive engines, software, etc.), this restricts their economic use.
Abstract:
BACKGROUND Mechanical unloading of failing hearts can trigger functional recovery but results in progressive atrophy and possibly detrimental adaptation. In an unbiased approach, we examined the dynamic effects of unloading duration on molecular markers indicative of myocardial damage, hypothesizing that potential recovery may be improved by an optimized unloading time. METHODS Heterotopically transplanted normal rat hearts were harvested at 3, 8, 15, 30, and 60 days. Forty-seven genes were analyzed using TaqMan-based microarray, Western blot, and immunohistochemistry. RESULTS In parallel with marked atrophy (22% to 64% volume loss at 3 and 60 days, respectively), expression of myosin heavy-chain isoforms (MHC-α/-β) was characteristically switched in a time-dependent manner. Genes involved in tissue remodeling (FGF-2, CTGF, TGF-β, IGF-1) were increasingly upregulated with duration of unloading. A distinct pattern was observed for genes involved in the generation of contractile force: an indiscriminate early downregulation was followed by a new steady state below normal. For the pro-apoptotic transcripts bax, bnip-3, and cleaved caspase-6 and -9, mRNA levels demonstrated a slight increase up to 30 days of unloading, with a pronounced increase at 60 days. Findings regarding cell death were confirmed on the protein level. Proteasome activity indicated an early increase of protein degradation but decreased below baseline in unloaded hearts at 60 days. CONCLUSIONS We identified incrementally increased apoptosis after myocardial unloading of the normal rat heart, which is exacerbated at late time points (60 days) and inversely related to the loss of myocardial mass. Our findings suggest an irreversible detrimental effect of long-term unloading on myocardium that may be precluded by partial reloading and amenable to molecular therapeutic intervention.