872 results for Heat pumps, load modelling, power quality, power system dynamics, power system simulation
Abstract:
This study focuses on radio-frequency inductively coupled thermal plasma (ICP) synthesis of nanoparticles, combining experimental and modelling approaches towards process optimization and industrial scale-up, in the framework of the FP7-NMP SIMBA European project (Scaling-up of ICP technology for continuous production of Metallic nanopowders for Battery Applications). First, the state of the art of nanoparticle production through conventional and plasma routes is summarized; then, results on the characterization of the plasma source and on the investigation of the nanoparticle synthesis process are presented, aiming at highlighting the fundamental process parameters while adopting a design-oriented modelling approach. In particular, an energy balance of the torch and of the reaction chamber, obtained with a calorimetric method, is presented, and results of three- and two-dimensional modelling of an ICP system are compared with calorimetric and enthalpy-probe measurements to validate the temperature field predicted by the model, which is used to characterize the ICP system under powder-free conditions. Moreover, results from the modelling of critical phases of the ICP synthesis process, such as precursor evaporation, vapour conversion into nanoparticles and nanoparticle growth, are presented, with the aim of providing useful insights both for the design and optimization of the process and into the underlying physical phenomena. Precursor evaporation, one of the phases with the highest impact on the industrial feasibility of the process, is discussed in detail: by employing models describing particle trajectories and thermal histories, adapted from those originally developed for other plasma technologies or applications, such as DC non-transferred arc torches and powder spheroidization, the evaporation of a micro-sized solid Si precursor in a laboratory-scale ICP system is investigated. Finally, the role of the thermo-fluid-dynamic fields in nanoparticle formation is discussed, together with a study on the effect of the reaction chamber geometry on the characteristics of the produced nanoparticles and on the process yield.
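The calorimetric energy balance mentioned above lends itself to a simple worked sketch. The following Python snippet is purely illustrative and is not the thesis code: it estimates the heat removed by each cooling-water circuit as Q = m_dot * c_p * dT and derives the net power coupled to the plasma and the torch thermal efficiency. The circuit names, flow rates, temperature rises, and plate power are hypothetical placeholders.

```python
# Illustrative calorimetric energy balance for an RF-ICP torch (hypothetical values).
# Each cooling circuit removes Q = m_dot * c_p * dT; power not lost to the torch body
# or induction coil leaves with the plasma jet into the reaction chamber.

CP_WATER = 4186.0  # J/(kg K), specific heat of the cooling water

def circuit_power(m_dot_kg_s: float, delta_T_K: float) -> float:
    """Heat removed by one cooling-water circuit, in watts."""
    return m_dot_kg_s * CP_WATER * delta_T_K

# Hypothetical plate power delivered by the RF generator
p_plate = 15_000.0  # W

# Hypothetical cooling-circuit measurements: (mass flow [kg/s], water temperature rise [K])
losses = {
    "torch_body": circuit_power(0.050, 12.0),
    "induction_coil": circuit_power(0.040, 8.0),
    "reaction_chamber": circuit_power(0.120, 10.0),
}

p_torch_losses = losses["torch_body"] + losses["induction_coil"]
p_to_plasma = p_plate - p_torch_losses          # power actually coupled into the gas
torch_efficiency = p_to_plasma / p_plate

print(f"Power lost in torch cooling : {p_torch_losses / 1e3:.2f} kW")
print(f"Net power to plasma         : {p_to_plasma / 1e3:.2f} kW")
print(f"Torch thermal efficiency    : {torch_efficiency:.2%}")
print(f"Heat removed in chamber     : {losses['reaction_chamber'] / 1e3:.2f} kW")
```

Repeating such a balance across operating conditions is one way a calorimetric characterization of the torch and chamber can be summarized before powder is injected.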
Abstract:
Aging is characterized by a chronic, low-grade inflammatory state called “inflammaging”. Mitochondria are the main source of reactive oxygen species (ROS), which trigger the production of pro-inflammatory molecules. We are interested in studying the age-related modifications of mitochondrial DNA (mtDNA), which can be affected by lifelong exposure to ROS and are responsible for mitochondrial dysfunction. Moreover, increasing evidence shows that telomere shortening, which naturally occurs with aging, is involved in mtDNA damage processes and thus in the pathogenesis of age-related disorders. The primary aim of this thesis was therefore the analysis of mtDNA copy number, deletion level and integrity in human biopsies of different ages from liver, from the vastus lateralis skeletal muscle of healthy subjects and of patients with limited mobility of the lower limbs (LMLL), and from adipose tissue. Telomere length and the expression of nuclear genes related to mitobiogenesis, fusion and fission, mitophagy, the mitochondrial protein quality control system, hypoxia, and the production of and protection from ROS were also evaluated. In liver, the age-related decrease in mtDNA integrity is accompanied by an increase in mtDNA copy number, suggesting the existence of a “compensatory mechanism” able to maintain the functionality of this organ. The vastus lateralis muscle behaves differently: no “compensatory pathway” is activated, and mtDNA integrity and copy number decrease with age, both in healthy subjects and in patients. Interestingly, mtDNA rearrangements do not occur in adipose tissue with advancing age. Finally, a marked gender difference appears in all tissues, suggesting that aging and gender differently affect mtDNA rearrangements and telomere length in the three human tissues considered, likely depending on their different metabolic needs and inflammatory status.
Abstract:
Numerical modelling was performed to study the dynamics of multilayer detachment folding and salt tectonics. In the case of multilayer detachment folding, analytically derived diagrams show several folding modes, half of which are applicable to crustal-scale folding. 3D numerical simulations are in agreement with 2D predictions, yet fold interactions result in complex fold patterns. Pre-existing salt diapirs change folding patterns as they localize the initial deformation. If diapir spacing is much smaller than the dominant folding wavelength, diapirs appear in fold synclines or limbs. Numerical models of 3D down-building diapirism show that sedimentation rate controls whether diapirs will form and influences the overall patterns of diapirism. Numerical codes were used to retrodeform modelled salt diapirs. Reverse modelling can retrieve the initial geometries of a 2D Rayleigh-Taylor instability with non-linear rheologies. Although intermediate geometries of down-built diapirs are retrieved, forward and reverse modelling solutions deviate. Finally, the dynamics of fold-and-thrust belts formed over a tilted viscous detachment is studied, and it is demonstrated that mechanical stratigraphy has an impact on the deformation style, switching from thrust- to folding-dominated. The basal angle of the detachment controls the deformation sequence of the fold-and-thrust belt, and the results are consistent with critical wedge theory.
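To give a flavour of the kind of analytical result behind such folding-mode diagrams, the classical single-layer viscous folding solution (not the multilayer derivation used in this thesis) predicts a dominant wavelength for a competent layer of thickness $h$ and viscosity $\eta_l$ embedded in a weaker matrix of viscosity $\eta_m$:

$$\lambda_d = 2\pi h \left( \frac{\eta_l}{6\,\eta_m} \right)^{1/3}.$$

Comparing a pre-existing diapir spacing with a wavelength of this type is what determines, in the sense described above, whether diapirs localize fold hinges or end up in synclines and limbs.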
Abstract:
Reliable data transfer is one of the most difficult tasks to accomplish in multihop wireless networks. Traditional transport protocols like TCP suffer severe performance degradation over multihop networks given the noisy nature of the wireless medium and the unstable connectivity conditions involved. The success of TCP in wired networks motivates its extension to wireless networks. A crucial challenge faced by TCP over these networks is how to operate smoothly with the 802.11 wireless MAC protocol, which implements a link-level retransmission mechanism in addition to short RTS/CTS control frames for collision avoidance. These features make the transmission of TCP acknowledgments (ACKs) quite costly: data and ACK packets incur similar medium-access overheads despite the much smaller size of the ACKs. In this paper, we further evaluate our dynamic adaptive strategy for reducing ACK-induced overhead and the consequent collisions. Our approach mirrors the sender-side congestion control: the receiver adapts on its own, delaying more ACKs when the channel is unconstrained and fewer otherwise. This improves not only throughput but also power consumption. Simulation evaluations show significant improvements in several scenarios.
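A minimal sketch of one way such a receiver-side policy could look is given below. It is an assumption-laden illustration, not the strategy evaluated in the paper: the class name, the delay bounds, and the loss heuristic (treating any out-of-order arrival as a sign of channel stress) are all invented for the example. The receiver acknowledges every d-th in-order packet, stretching d when the channel looks clean and collapsing it when gaps appear.

```python
# Illustrative receiver-side dynamic delayed-ACK policy (hypothetical parameters).
# The receiver ACKs every `delay`-th in-order packet, increasing the delay when the
# channel appears unconstrained and reverting to immediate ACKs when gaps suggest loss.

class AdaptiveDelayedAck:
    def __init__(self, min_delay=1, max_delay=4):
        self.min_delay = min_delay
        self.max_delay = max_delay
        self.delay = min_delay          # ACK every `delay` data packets
        self.unacked = 0                # in-order packets received since last ACK
        self.expected_seq = 0           # next expected sequence number

    def on_data(self, seq: int) -> bool:
        """Return True if an ACK should be sent for this arrival."""
        if seq == self.expected_seq:
            self.expected_seq += 1
            self.unacked += 1
            if self.unacked >= self.delay:
                # Channel looks clean: ACK now and stretch the delay window.
                self.delay = min(self.delay + 1, self.max_delay)
                self.unacked = 0
                return True
            return False
        # Out-of-order arrival hints at loss or contention: ACK immediately
        # (duplicate ACK) and fall back to conservative behaviour.
        self.delay = self.min_delay
        self.unacked = 0
        return True

# Example: a clean run followed by a gap.
rx = AdaptiveDelayedAck()
for s in [0, 1, 2, 3, 5]:
    print(s, "-> ACK" if rx.on_data(s) else "-> wait")
```

Fewer ACK transmissions mean fewer medium acquisitions at the MAC layer, which is where the throughput and energy gains described in the abstract would come from.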
Abstract:
With the development of genotyping and next-generation sequencing technologies, multi-marker testing in genome-wide association studies and rare-variant association studies has become an active research area in statistical genetics. This dissertation presents three methodologies for association studies that exploit different features of genetic data and demonstrates how to use these methods to test genetic association hypotheses. The methods can be categorized into three scenarios: 1) multi-marker testing for regions of strong linkage disequilibrium, 2) multi-marker testing for family-based association studies, and 3) multi-marker testing for rare-variant association studies. I also discuss the advantages of these methods and demonstrate their power through simulation studies and applications to real genetic data.
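For readers unfamiliar with the third scenario, the sketch below shows a generic burden-style multi-marker test for rare variants. It is a standard textbook instance of the idea, not the dissertation's own method, and the simulated allele frequencies, effect size, and sample size are arbitrary.

```python
# Illustrative burden-style multi-marker test for rare variants (not the
# dissertation's method): rare-variant genotypes in a region are collapsed
# into a single burden score per individual, which is then tested against
# the phenotype with an ordinary regression.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

n_subjects, n_variants = 500, 20
maf = rng.uniform(0.005, 0.02, size=n_variants)            # rare minor-allele frequencies
genotypes = rng.binomial(2, maf, size=(n_subjects, n_variants))

burden = genotypes.sum(axis=1)                              # collapsed rare-allele count
phenotype = 0.3 * burden + rng.normal(size=n_subjects)      # simulated effect + noise

slope, intercept, r, p_value, se = stats.linregress(burden, phenotype)
print(f"burden-test effect = {slope:.3f}, p = {p_value:.2e}")
```

Collapsing variants in this way gains power when most rare alleles in the region act in the same direction, which is exactly the kind of data feature a multi-marker method is designed to exploit.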
Abstract:
Research on the development of path-dependent processes essentially points to positive feedback, in the form of increasing returns, as the main driving force of such processes. Furthermore, path dependence can be affected by context factors, such as different degrees of complexity. Up to now, it has been unclear whether and how different settings of complexity affect path-dependent processes and the probability of lock-in. In this paper we investigate the relationship between environmental complexity and path dependence by means of an experimental study. By focusing on information load and decision quality in chronological sequences, the study explores the impact of complexity on decision-making processes. The results contribute both to the development of path-dependence theory and to a better understanding of decision-making behavior under conditions of positive feedback. Since previous path research has mostly applied qualitative case-study research and, to a lesser extent, simulations, this paper makes a further contribution by establishing an experimental approach for research on path dependence.
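The mechanism of increasing returns and lock-in referred to above is often illustrated with the canonical Polya-urn model. The sketch below is that textbook illustration, not the experimental design of this paper: every adoption of a technology raises its probability of being adopted again, so runs with identical starting conditions lock in to very different outcomes.

```python
# Canonical Polya-urn illustration of increasing returns and lock-in
# (a standard textbook model, not the experimental design of this paper):
# each choice of a technology makes that technology more likely to be
# chosen again, so independent runs lock in to different market shares.

import random

def polya_run(steps: int = 10_000, seed: int = 0) -> float:
    """Return the final share of technology A after `steps` adoptions."""
    rng = random.Random(seed)
    a, b = 1, 1                         # initial 'balls' for technologies A and B
    for _ in range(steps):
        if rng.random() < a / (a + b):  # positive feedback: share -> adoption probability
            a += 1
        else:
            b += 1
    return a / (a + b)

# Identical starting conditions, very different locked-in outcomes.
for seed in range(5):
    print(f"run {seed}: final share of A = {polya_run(seed=seed):.2f}")
```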
Abstract:
In recent years, additive manufacturing processes have developed into effective tools for the rapid development of products of almost arbitrary complexity. At the same time, there is a demand to guarantee the reproducibility of the parts as well as near-series or series-identical properties. The variety and scope of applications, together with the large number of different additive manufacturing processes, call for adequate quality monitoring and quality control systems. One approach to the quality assessment of additive manufacturing processes is the introduction of a system of key indicators. To this end, requirement profiles and quality characteristics for additively manufactured parts must first be defined; these are represented by test specimen geometries and classified by means of individual indicators. Within the scope of the investigations carried out, quality assessment based on test specimen geometries was qualified using the laser sintering process as an example. By influencing the process parameters, i.e. by deliberately introducing disturbance variables which, individually or in combination, can lead to impermissible quality fluctuations, it is possible to assess the quality of the product. The definition of individual indicators that enable control and monitoring as well as the prediction of potential defects offers essential possibilities for quality assessment. Merging these into an overall system of key indicators is intended, on the one hand, to assess the process on the basis of the defined requirement profiles and, on the other hand, to derive a direct relationship between the selected disturbance variables and the process variables, so that a statement about part quality can be made in advance.
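A minimal sketch of how such individual indicators might be normalized against tolerances and merged into one composite index is given below. It is only an assumption about how an aggregation could be structured; the feature names, tolerances, and weights are hypothetical and are not taken from the investigations described above.

```python
# Hypothetical sketch of aggregating individual quality indicators
# (Einzelkennzahlen) from test-specimen measurements into one composite
# index; feature names, tolerances, and weights are illustrative only.

measurements = {            # measured deviation from nominal, per quality feature
    "dimensional_deviation_mm": 0.12,
    "surface_roughness_Ra_um": 9.5,
    "tensile_strength_drop_pct": 3.0,
}
tolerances = {              # maximum admissible deviation per feature
    "dimensional_deviation_mm": 0.20,
    "surface_roughness_Ra_um": 12.0,
    "tensile_strength_drop_pct": 5.0,
}
weights = {                 # relative importance in the requirement profile
    "dimensional_deviation_mm": 0.5,
    "surface_roughness_Ra_um": 0.2,
    "tensile_strength_drop_pct": 0.3,
}

def single_indicator(value: float, tolerance: float) -> float:
    """1.0 = nominal quality, 0.0 = at or beyond the tolerance limit."""
    return max(0.0, 1.0 - value / tolerance)

indicators = {k: single_indicator(measurements[k], tolerances[k]) for k in measurements}
composite = sum(weights[k] * indicators[k] for k in indicators)

for name, value in indicators.items():
    print(f"{name:30s} indicator = {value:.2f}")
print(f"composite quality index        = {composite:.2f}")
```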
Abstract:
To bring objectivity to the discussion about the TK study on the effect of quality monitoring in outpatient psychotherapy, the scientific advisory board has presented the results from its point of view. Regarding the main research question, the final report is cited, which confirms that this was a confirmatory investigation. At its core, it was intended to test the hypotheses on the superiority of the TK model over the established expert-review procedure (Gutachterverfahren). The TK model is a "complex intervention" consisting of several components. The study results therefore only allow the conclusion that this complex intervention, in its combination, showed no superiority. Whether individual components were effective requires further research. Finally, the representativeness and selectivity problems of the study and of the usable samples are explained, and their relevance is set out with reference to the literature.
Abstract:
The nonsense-mediated mRNA decay (NMD) pathway is best known as a translation-coupled quality control system that recognizes and degrades aberrant mRNAs with ORF-truncating premature termination codons (PTCs), but a more general role of NMD in the posttranscriptional regulation of gene expression is indicated by transcriptome-wide mRNA profiling studies that identified a plethora of physiological mRNAs as NMD substrates. We try to decipher the mechanism of mRNA targeting to the NMD pathway in human cells. Recruitment of the conserved RNA-binding helicase UPF1 to target mRNAs has been reported to occur through interaction with release factors at terminating ribosomes, but evidence for translation-independent interaction of UPF1 with the 3’ untranslated region (UTR) of mRNAs has also been reported. We determined UPF1 binding sites transcriptome-wide by individual-nucleotide resolution UV crosslinking and immunoprecipitation (iCLIP) in human cells, either untreated or after inhibition of translation. We detected a strongly enriched association of UPF1 with 3’ UTRs in undisturbed, translationally active cells. After translation inhibition, a significant increase in UPF1 binding to coding sequences (CDS) was observed, indicating that UPF1 binds RNA before translation and is displaced from the CDS by translating ribosomes. This suggests that the decision to trigger NMD occurs after the association of UPF1 with mRNA, presumably through activation of RNA-bound UPF1 by aberrant translation termination. In a second, recent study, we revisited the reported restriction of NMD in mammals to the ‘pioneer round of translation’, i.e. to cap-binding complex (CBC)-bound mRNAs. A limitation of mammalian NMD to early rounds of translation would imply a mechanistic difference to NMD in yeast and plants that is unexpected from an evolutionary perspective, since in those organisms PTC-containing mRNAs seem to be available to NMD at each round of translation. In contrast to previous reports, our comparison of the decay kinetics of two NMD reporter genes in mRNA fractions bound to either CBC or the eukaryotic initiation factor 4E (eIF4E) in human cells revealed that NMD destabilizes eIF4E-bound transcripts as efficiently as those associated with CBC. These results corroborate an emerging unified model of NMD substrate recognition, according to which NMD can ensue at every aberrant translation termination event.
Abstract:
OBJECTIVE The first description of the simplified acute physiology score (SAPS) II dates back to 1993, but little is known about its accuracy in daily practice. Our purpose was to evaluate the accuracy of scoring, and the factors that affect it, in a nationwide survey. METHODS Twenty clinical scenarios, covering a broad range of illness severities, were randomly assigned to a convenience sample of physicians and nurses in Swiss adult intensive care units (ICUs), who were asked to assess the SAPS II score for a single scenario. These data were compared to a reference defined by five experienced researchers. The results were cross-matched with demographic characteristics, data on training and quality control for scoring, and the structural and organisational properties of each participating ICU. RESULTS A total of 345 caregivers from 53 adult ICU providers completed the SAPS II evaluation of one clinical scenario. The mean SAPS II score was 42.6 ± 23.4, with a bias of +5.74 (95% CI 2.0-9.5) compared to the reference score. There was no evidence that the bias varied according to case severity, ICU size, linguistic area, profession (physician vs. nurse), experience, initial SAPS II training, or the presence of a quality control system. CONCLUSION This nationwide survey revealed substantial variability in SAPS II scoring results. On average, the SAPS II score was overestimated by more than 13%, irrespective of the profession or experience of the scorer or of the structural characteristics of the ICUs.
Abstract:
Do apprenticeships convey mainly general or also firm- and occupation-specific human capital? Specific human capital may allow for specialization gains, but may also lead to allocative inefficiency due to mobility barriers. We analyse the case of Switzerland, which combines a comprehensive, high-quality apprenticeship system with a lightly regulated labour market. To assess human capital transferability after standardized firm-based apprenticeship training, we analyse inter-firm and occupational mobility and their effects on post-training wages. Using a longitudinal data set based on the PISA 2000 survey, we find high inter-firm and low occupational mobility within one year after graduation. Accounting for endogenous changes, we find a negative effect of occupation changes on wages, but no significant wage effect for firm changes. This indicates that occupation-specific human capital is an important component of apprenticeship training and that skills are highly transferable within an occupational field.
Abstract:
The purpose of this multiple-case study was to determine how hospital subsystems (such as physician monitoring and credentialing, quality assurance, risk management, and peer review) were supporting the monitoring of physicians. Three large metropolitan hospitals in Texas were studied and designated as hospitals #1, #2, and #3. Recognizing that hospital subsystems are unique entities and part of a larger system, conclusions were drawn on the premise of a quality control system, in relation to the tools of government (particularly the Health Care Quality Improvement Act (HCQIA)), and in relation to the subsystem itself as a tool of the hospital. Three major analytical assessments were performed: first, the subsystems were analyzed as to their "completeness"; secondly, the subsystems were analyzed for "performance"; and thirdly, the subsystems were analyzed with reference to the interaction of completeness and performance. The physician credentialing and monitoring and the peer review subsystems, as quality control systems, were most complete, efficient, and effective in hospitals #1 and #3. The HCQIA did not seem to be an influencing factor in the completeness of the subsystem in hospital #1. The quality assurance and risk management subsystem in hospital #2 was not representative of completeness and performance, and the HCQIA was not an influencing factor in the completeness of the Q.A. or R.M. systems in any hospital. The efficiency (computerization) of the physician credentialing, quality assurance and peer review subsystems in hospitals #1 and #3 seemed to contribute to their effectiveness (system-wide effect). The results indicated that the more complete, effective, and efficient subsystems were characterized by (1) all defined activities being met, (2) the HCQIA being an influencing factor, (3) a decentralized administrative structure, (4) computerization as an important element, and (5) staff who were sophisticated in subsystem operations. However, other variables were identified that deserve further research as to their effect on the completeness and performance of subsystems. They include (1) medical staff affiliations, (2) system funding levels, (3) the system's administrative structure, and (4) the "cultural" characteristics of the physician staff. Perhaps by understanding other influencing factors, health care administrators may plan subsystems that are compatible with legislative requirements and administrative objectives.
Abstract:
Implementing a quality control system requires a clear view of the processes involved, the responsibilities, the organization, the quality records, and the preventive and corrective actions. Nowadays an integrative approach is required, in which activities and physical resources are structured. This is why flow diagrams are of fundamental importance in management decisions. Among the various possible flow diagrams, this work proposes the use of process flow diagrams, which allow a series of activities and physical resources to be ordered sequentially. A flow diagram for white wine production in an experimental winery is presented. Through it, the logic of the production process can be understood, a diagnosis of the process can be made, and the critical control points that could degrade product or environmental quality can be detected. It is also a starting point for assembling and operating an inspection and testing plan, for the documentation required in audits, and a technique for detecting points of environmental contamination.
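One way to see how a process flow diagram leads directly to an inspection plan is to represent it as data. The sketch below is a hypothetical illustration, not the winery's actual flow or control protocol: an ordered list of steps, each flagged as a critical control point (CCP) or not, from which the inspection-and-testing plan is derived.

```python
# Illustrative representation of a process flow diagram as data: an ordered
# list of steps, each flagged as a critical control point (CCP) or not, from
# which a simple inspection-and-testing plan can be derived. Step names and
# checks are hypothetical, not the experimental winery's actual protocol.

process_flow = [
    {"step": "grape reception",    "ccp": True,  "check": "SO2 addition, grape temperature"},
    {"step": "pressing",           "ccp": False, "check": None},
    {"step": "must clarification", "ccp": True,  "check": "turbidity (NTU) before fermentation"},
    {"step": "fermentation",       "ccp": True,  "check": "temperature and density, twice daily"},
    {"step": "racking",            "ccp": False, "check": None},
    {"step": "bottling",           "ccp": True,  "check": "free SO2 and microbiological control"},
]

def inspection_plan(flow):
    """Yield (position, step, check) for every critical control point."""
    for i, node in enumerate(flow, start=1):
        if node["ccp"]:
            yield i, node["step"], node["check"]

print("Inspection and testing plan (CCPs only):")
for pos, step, check in inspection_plan(process_flow):
    print(f"  {pos}. {step}: {check}")
```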
Abstract:
Stable isotope and ice-rafted debris records from three core sites in the mid-latitude North Atlantic (IODP Site U1313, MD01-2446, MD03-2699) are combined with records from ODP Sites 1056/1058 and 980 to reconstruct hydrographic conditions during the middle Pleistocene, spanning Marine Isotope Stages (MIS) 9-14 (300-540 ka). Core MD03-2699 is the first high-resolution mid-Brunhes record from the North Atlantic's eastern boundary upwelling system covering the complete MIS 11c interval and MIS 13. The array of sites reflects western and eastern basin boundary currents as well as a north-to-south transect sampling of subpolar and transitional water masses, and allows the reconstruction of transport pathways in the upper limb of the North Atlantic's circulation. Hydrographic conditions in the surface and deep ocean during the peak interglacials MIS 9 and 11 were similar among all the sites, with relatively stable conditions, and confirm prolonged warmth during MIS 11c also for the mid-latitudes. Sea surface temperature (SST) reconstructions further reveal that in the mid-latitude North Atlantic MIS 11c is associated with two plateaus, the younger of which is slightly warmer. Enhanced subsurface northward heat transport in the eastern boundary current system, especially during early MIS 11c, is indicated by the presence of tropical planktic foraminifer species and raises the question of how strongly it impacted the Portuguese upwelling system. Deep water ventilation at the onset of MIS 11c significantly preceded surface water ventilation. Although MIS 13 was generally colder and more variable than the younger interglacials, the surface water circulation scheme was the same. The greatest differences between the sites existed during the glacial inceptions and glacials. Then a north-south trending hydrographic front separated the nearshore and offshore waters off Portugal. While offshore waters originated from the North Atlantic Current, as indicated by the similarities between the records of IODP Site U1313, ODP Site 980 and MD01-2446, nearshore waters, as recorded in core MD03-2699, derived from the Azores Current and thus from the subtropical gyre. Except for MIS 12, Azores Current influence seems to be related to eastern boundary system dynamics and not to changes in the Atlantic overturning circulation.
Abstract:
The western warm pools of the Atlantic and Pacific oceans are a critical source of heat and moisture for the tropical climate system. Over the past five million years, global mean temperatures have cooled by 3-4 °C. Yet current reconstructions of sea surface temperatures indicate that temperatures in the warm pools have remained stable during this time. This stability has been used to suggest that tropical sea surface temperatures are controlled by some sort of thermostat-like regulation. Here we reconstruct sea surface temperatures in the South China Sea, Caribbean Sea and western equatorial Pacific Ocean for the past five million years, using a combination of the Mg/Ca, TEXH86 and Uk'37 surface temperature proxies. Our data indicate that during the period of Pliocene warmth, from about 5 to 2.6 million years ago, the western Pacific and western Atlantic warm pools were about 2 °C warmer than today. We suggest that the apparent lack of warming seen in previous reconstructions was an artefact of low seawater Mg/Ca ratios in the Pliocene oceans. Taking this bias into account, our data indicate that tropical sea surface temperatures did change in conjunction with global mean temperatures. We therefore conclude that the temperature of the warm pools of the equatorial oceans during the Pliocene was not limited by a thermostat-like mechanism.
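The seawater Mg/Ca bias invoked above can be illustrated with a small worked example. The exponential form of the Mg/Ca calibration (Mg/Ca = B * exp(A * T)) is standard, but the specific constants A and B, the power-law sensitivity H to seawater Mg/Ca, the seawater ratios, and the sample value below are illustrative placeholders, not the calibration or data used in this study.

```python
# Illustrative Mg/Ca palaeothermometry sketch showing how a lower Pliocene
# seawater Mg/Ca ratio biases reconstructed temperatures low if ignored.
# Constants A, B, the exponent H and the seawater ratios are placeholders.

import math

A, B = 0.09, 0.38          # generic exponential calibration constants (per deg C, mmol/mol)
H = 0.4                    # assumed power-law sensitivity to seawater Mg/Ca
MGCA_SW_MODERN = 5.2       # modern seawater Mg/Ca (mol/mol)
MGCA_SW_PLIOCENE = 4.0     # assumed lower Pliocene seawater Mg/Ca (mol/mol)

def temperature(mgca_foram, mgca_sw=MGCA_SW_MODERN):
    """Invert the calibration, optionally correcting for seawater Mg/Ca."""
    corrected = mgca_foram / (mgca_sw / MGCA_SW_MODERN) ** H
    return math.log(corrected / B) / A

mgca_measured = 4.2  # hypothetical foraminiferal Mg/Ca from a Pliocene sample (mmol/mol)

t_uncorrected = temperature(mgca_measured)                    # assumes modern seawater
t_corrected = temperature(mgca_measured, MGCA_SW_PLIOCENE)    # accounts for lower Mg/Ca_sw

print(f"Apparent SST (no seawater correction): {t_uncorrected:.1f} C")
print(f"SST after seawater Mg/Ca correction  : {t_corrected:.1f} C")
print(f"Warm bias recovered                  : {t_corrected - t_uncorrected:+.1f} C")
```

With these placeholder numbers the correction adds roughly 1 °C to the reconstructed temperature, which conveys the direction of the effect: ignoring lower Pliocene seawater Mg/Ca makes warm-pool temperatures look spuriously stable.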