989 results for Phase-space Methods
Abstract:
Biofuels for transport are a renewable source of energy that was once heralded as a solution to multiple problems associated with poor urban air quality, the overproduction of agricultural commodities, the energy security of the European Union (EU) and climate change. It was only after the Union had implemented an incentivizing framework of legal and political instruments for the production, trade and consumption of biofuels that the problems of weakening food security, environmental degradation and increasing greenhouse gas emissions through land-use changes began to unfold. In other words, the gap between the political aims for which biofuels are promoted and their consequences has grown – which is also recognized by EU policy-makers. Therefore, the global networks of producing, trading and consuming biofuels may face a complete restructuring if the European Commission accomplishes its pursuit of sidelining crop-based biofuels after 2020. My aim with this dissertation is not only to trace the manifold evolutions of the instruments used by the Union to govern biofuels but also to reveal how this evolution has influenced the dynamics of biofuel development. Therefore, I study the ways in which the EU’s legal and political instruments of steering biofuels are co-constitutive with the globalized spaces of biofuel development. My analytical strategy can be outlined through three concepts. I use the term ‘assemblage’ to approach the operations of the loose entity of actors and non-human elements that are the constituents of multi-scalar and multi-sectorial biofuel development. ‘Topology’ refers to the spatiality of this European biofuel assemblage and its parts, whose evolving relations are treated as the active constituents of space, instead of simply being located in space. I apply the concept of ‘nomosphere’ to characterize the framework of policies, laws and other instruments that the EU applies and construes while attempting to govern biofuels.
Even though both the materials and methods vary in the independent articles, these three concepts characterize the analytical strategy that allows me to study law, policy and space in association with each other. The results of my examinations underscore how the EU’s instruments of governance constitute and stabilize the spaces of production and, on the other hand, how topological ruptures in biofuel development have reinforced the need to reform policies. This analysis maps the vast scope of actors that are influenced by the mechanisms of EU biofuel governance and, what is more, shows how they are actively engaging in the Union’s institutional policy formulation. By examining the consequences of rapid biofuel development that are spatially dislocated from the established spaces of producing, trading and consuming biofuels, such as indirect land-use changes, I unfold the processes not tackled by the instruments of the EU. Indeed, it is these spatially dislocated processes that have pushed the Commission to construe a new type of biofuel governance: transferring the instruments of climate change mitigation to land-use policies. Although efficient in mitigating these dislocated consequences, these instruments have also created a peculiar ontological scaffolding for governing biofuels. According to this mode of governance, the spatiality of biofuel development appears to be already determined, and the agency that could dampen the negative consequences originating from land-use practices is treated as irrelevant.
Abstract:
This thesis is concerned with state and parameter estimation in state space models. The estimation of states and parameters is an important task when mathematical modeling is applied in many different application areas such as global positioning systems, target tracking, navigation, brain imaging, the spread of infectious diseases, biological processes, telecommunications, audio signal processing, stochastic optimal control, machine learning, and physical systems. In Bayesian settings, the estimation of states or parameters amounts to computation of the posterior probability density function. Except for a very restricted number of models, it is impossible to compute this density function in closed form. Hence, we need approximation methods. A state estimation problem involves estimating the states (latent variables) that are not directly observed in the output of the system. In this thesis, we use the Kalman filter, extended Kalman filter, Gauss–Hermite filters, and particle filters to estimate the states based on available measurements. Among these filters, particle filters are numerical methods for approximating the filtering distributions of non-linear non-Gaussian state space models via Monte Carlo. The performance of a particle filter heavily depends on the chosen importance distribution. For instance, an inappropriate choice of the importance distribution can lead to the failure of convergence of the particle filter algorithm. In this thesis, we analyze the theoretical Lᵖ particle filter convergence with general importance distributions, where p ≥ 2 is an integer. A parameter estimation problem is concerned with inferring the model parameters from measurements. For high-dimensional complex models, estimation of parameters can be done by Markov chain Monte Carlo (MCMC) methods. In its operation, an MCMC method requires the unnormalized posterior distribution of the parameters and a proposal distribution.
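The bootstrap particle filter mentioned above can be illustrated with a minimal sketch. The scalar linear-Gaussian model below is a hypothetical toy example (not one of the thesis's applications), and the transition prior serves as the importance distribution, the simplest choice whose limitations the abstract alludes to:

```python
import math
import random

def bootstrap_pf(ys, n=500, a=0.8, q=1.0, r=1.0, seed=0):
    """Bootstrap particle filter for x_t = a*x_{t-1} + N(0, q), y_t = x_t + N(0, r).

    Returns the filtered posterior mean E[x_t | y_1..y_t] for each step.
    """
    rng = random.Random(seed)
    particles = [rng.gauss(0.0, 1.0) for _ in range(n)]
    means = []
    for y in ys:
        # Propagate particles through the transition prior (the importance distribution).
        particles = [a * x + rng.gauss(0.0, math.sqrt(q)) for x in particles]
        # Weight each particle by the measurement likelihood.
        weights = [math.exp(-0.5 * (y - x) ** 2 / r) for x in particles]
        total = sum(weights)
        weights = [w / total for w in weights]
        means.append(sum(w * x for w, x in zip(weights, particles)))
        # Multinomial resampling to counteract weight degeneracy.
        particles = rng.choices(particles, weights=weights, k=n)
    return means
```

With the transition prior as importance distribution, the weights can collapse when measurements are highly informative; this is precisely why the choice of importance distribution governs convergence, as noted above.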
In this thesis, we show how the posterior density function of the parameters of a state space model can be computed by filtering-based methods, where the states are integrated out. This type of computation is then applied to estimate the parameters of stochastic differential equations. Furthermore, we compute the partial derivatives of the log-posterior density function and use the hybrid Monte Carlo and scaled conjugate gradient methods to infer the parameters of stochastic differential equations. The computational efficiency of MCMC methods depends highly on the chosen proposal distribution. A commonly used proposal distribution is the Gaussian. With this kind of proposal, the covariance matrix must be well tuned. To tune it, adaptive MCMC methods can be used. In this thesis, we propose a new way of updating the covariance matrix using the variational Bayesian adaptive Kalman filter algorithm.
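The role of proposal tuning can be illustrated with the standard Haario-style adaptive Metropolis scheme. This is a generic one-dimensional sketch that adapts the proposal scale from the chain's own history, not the thesis's variational Bayesian adaptive Kalman filter update:

```python
import math
import random

def adaptive_metropolis(logpost, n_iter=20000, x0=0.0, seed=0):
    """Random-walk Metropolis whose Gaussian proposal scale is adapted from
    the running variance of the chain (Haario-style; the diminishing-adaptation
    conditions required for strict ergodicity are glossed over in this sketch)."""
    rng = random.Random(seed)
    x, lp = x0, logpost(x0)
    chain = []
    mean, m2 = 0.0, 0.0  # Welford running mean / sum of squared deviations
    scale = 1.0
    for i in range(1, n_iter + 1):
        prop = x + rng.gauss(0.0, scale)
        lp_prop = logpost(prop)
        # Metropolis accept/reject using the unnormalized log-posterior.
        if rng.random() < math.exp(min(0.0, lp_prop - lp)):
            x, lp = prop, lp_prop
        chain.append(x)
        d = x - mean
        mean += d / i
        m2 += d * (x - mean)
        if i > 100:
            # 2.4 * (posterior std) is the classic one-dimensional scaling rule.
            scale = 2.4 * math.sqrt(m2 / (i - 1) + 1e-6)
    return chain
```

In higher dimensions the same idea tunes a full proposal covariance matrix rather than a scalar scale, which is the quantity the thesis's filter-based update targets.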
Abstract:
Heat transfer effectiveness in nuclear rod bundles is of great importance to nuclear reactor safety and economics. An important design parameter is the Critical Heat Flux (CHF), which limits the heat transferred from the fuel to the coolant. The CHF is determined by flow behaviour, especially the turbulence created inside the fuel rod bundle. Adiabatic experiments can be used to characterize the flow behaviour separately from the heat transfer phenomena in diabatic flow. To enhance the turbulence, mixing vanes are attached to the spacer grids, which hold the rods in place. The vanes either make the flow swirl around a single sub-channel or induce cross-mixing between adjacent sub-channels. In adiabatic two-phase conditions, an important phenomenon that can be investigated is the effect of the spacer on canceling the lift force, which gathers small bubbles at the rod surfaces, leading in diabatic conditions to a decreased CHF and thus limiting the reactor power. Computational Fluid Dynamics (CFD) can be used to simulate the flow numerically and to test how different spacer configurations affect the flow. Experimental data are needed to validate and verify the CFD models used. The modeling of turbulence, especially, is challenging even for single-phase flow inside the complex sub-channel geometry. In two-phase flow, other factors such as bubble dynamics further complicate the modeling. To investigate the spacer grid effect on two-phase flow, and to provide further experimental data for CFD validation, a series of experiments was run on an adiabatic sub-channel flow loop using a duct-type spacer grid with different configurations. Utilizing wire-mesh sensor technology, the facility gives high-resolution experimental data in both time and space. The experimental results indicate that the duct-type spacer grid is less effective in canceling the lift force effect than the egg-crate-type spacer tested earlier.
Abstract:
A high-frequency cycloconverter acts as a direct ac-to-ac power converter circuit that does not require a diode bridge rectifier. The bridgeless topology makes it possible to remove the forward voltage drop losses that are present in a diode bridge. In addition, the on-state losses can be reduced to 1.5 times the on-state resistance of the switches in half-bridge operation of the cycloconverter. A high-frequency cycloconverter is reviewed and the charging effect of the dc-capacitors in ``back-to-back'' or synchronous mode operation is analyzed. In addition, a control method is introduced for regulating the dc-voltage of the ac-side capacitors in synchronous operation mode. The controller regulates the dc-capacitor voltages and prevents the switches from reaching the overvoltage level. This is accomplished by varying the phase shift between the upper and the lower gate signals. By adding a phase shift between the gate signal pairs, the charge stored in the energy storage capacitors can be discharged through the resonant load and, consequently, the output resonant current amplitude can be improved. The above goals are analyzed and illustrated with simulations. The theory is supported with practical measurements in which the proposed control method is implemented in an FPGA device and tested with a high-frequency cycloconverter using super-junction power MOSFETs as switching devices.
Abstract:
Permanent magnet synchronous machines (PMSM) have become widely used in applications because of their high efficiency compared to synchronous machines with an excitation winding or to induction motors. This feature of the PMSM is achieved through the use of permanent magnets (PM) as the main excitation source. The magnetic properties of the PM have a significant influence on all the PMSM characteristics. Recent observations of PM material properties when used in rotating machines revealed that in all PMSMs the magnets do not necessarily operate in the second quadrant of the demagnetization curve, which makes the magnets prone to hysteresis losses. Moreover, no good analytical approach has yet been derived for the magnetic flux density distribution along the PM during different short-circuit faults. The main task of this thesis is to derive a simple analytical tool which can predict the magnetic flux density distribution along a rotor-surface-mounted PM in two cases: during normal operating mode, and at the moment of a three-phase symmetrical short circuit that is worst from the PM’s point of view. Surface-mounted PMSMs were selected because of their prevalence and relatively simple construction. The proposed model is based on the combination of two theories: magnetic circuit theory and space vector theory. The comparison of the results for the normal operating mode obtained from finite element software with the results calculated with the proposed model shows good accuracy of the model in the parts of the PM which are most prone to hysteresis losses. The comparison of the results for the three-phase symmetrical short circuit revealed significant inaccuracy of the proposed model compared with the results from the finite element software. An analysis of the reasons for this inaccuracy is provided. The impact on the model of Carter factor theory and of the assumption that air has the same permeability as the PM is analyzed.
Propositions for further development of the model are presented.
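The space vector theory the model draws on rests on the Clarke and Park transforms, which can be sketched generically as follows. The amplitude-invariant scaling convention used here is an assumption for illustration, not necessarily the one adopted in the thesis:

```python
import math

def clarke(ia, ib, ic):
    """Amplitude-invariant Clarke transform: three phase currents -> alpha-beta frame."""
    alpha = (2.0 * ia - ib - ic) / 3.0
    beta = (ib - ic) / math.sqrt(3.0)
    return alpha, beta

def park(alpha, beta, theta):
    """Park transform: rotate the alpha-beta vector into the rotor d-q frame
    at electrical angle theta (radians)."""
    d = alpha * math.cos(theta) + beta * math.sin(theta)
    q = -alpha * math.sin(theta) + beta * math.cos(theta)
    return d, q
```

For a balanced three-phase set, the Clarke transform yields a rotating vector of constant magnitude, and the Park transform renders it stationary in the rotor frame, which is what makes space-vector expressions for the armature reaction on the PM tractable.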
Abstract:
We evaluated the expression of 10 adhesion molecules on peripheral blood tumor cells of 17 patients with chronic lymphocytic leukemia, 17 with mantle-cell lymphoma, and 13 with nodal or splenic marginal B-cell lymphoma, all in the leukemic phase and before the beginning of any therapy. The diagnosis of B-cell non-Hodgkin's lymphomas was based on cytological, histological, immunophenotypic, and molecular biology methods. The mean fluorescence intensity of the adhesion molecules in tumor cells was measured by flow cytometry of CD19-positive cells and differed amongst the types of lymphomas. Comparison of chronic lymphocytic leukemia and mantle-cell lymphoma showed that the former presented a higher expression of CD11c and CD49c, and a lower expression of CD11b and CD49d adhesion molecules. Comparison of chronic lymphocytic leukemia and marginal B-cell lymphoma showed that the former presented a higher expression of CD49c and a lower expression of CD11a, CD11b, CD18, CD49d, CD29, and CD54. Finally, comparison of mantle-cell lymphoma and marginal B-cell lymphoma showed that marginal B-cell lymphoma had a higher expression of CD11a, CD11c, CD18, CD29, and CD54. Thus, the CD49c/CD49d pair consistently demonstrated a distinct pattern of expression in chronic lymphocytic leukemia compared with mantle-cell lymphoma and marginal B-cell lymphoma, which could be helpful for the differential diagnosis. Moreover, the distinct profiles of adhesion molecules in these diseases may be responsible for their different capacities to invade the blood stream.
Abstract:
Crystal properties, product quality and particle size are determined by the operating conditions in the crystallization process. Thus, in order to obtain desired end-products, the crystallization process should be effectively controlled based on reliable kinetic information, which can be provided by powerful analytical tools such as Raman spectrometry and thermal analysis. The present research work studied various crystallization processes such as reactive crystallization, precipitation with anti-solvent and evaporation crystallization. The goal of the work was to understand more comprehensively the fundamentals, phenomena and utilizations of crystallization, and to establish proper methods to control particle size distribution, especially for three-phase gas-liquid-solid crystallization systems. As a part of the solid-liquid equilibrium studies in this work, prediction of KCl solubility in a MgCl2-KCl-H2O system was studied theoretically. Additionally, a solubility prediction model based on the Pitzer thermodynamic model was investigated, building on solubility measurements of potassium dihydrogen phosphate in the presence of non-electrolyte organic substances in aqueous solutions. The prediction model helps to extend literature data and offers an easy and economical way to choose a solvent for anti-solvent precipitation. Using experimental and modern analytical methods, precipitation kinetics and mass transfer in reactive crystallization of magnesium carbonate hydrates with magnesium hydroxide slurry and CO2 gas were systematically investigated. The obtained results gave deeper insight into gas-liquid-solid interactions and the mechanisms of this heterogeneous crystallization process. The research approach developed can provide theoretical guidance and act as a useful reference for promoting the development of gas-liquid reactive crystallization.
Gas-liquid mass transfer of absorption in the presence of solid particles in a stirred tank was investigated in order to gain an understanding of how different-sized particles interact with gas bubbles. Based on the obtained volumetric mass transfer coefficient values, it was found that the influence of the presence of small particles on gas-liquid mass transfer cannot be ignored, since there are interactions between bubbles and particles. Raman spectrometry was successfully applied for liquid and solids analysis in semi-batch anti-solvent precipitation and evaporation crystallization. Real-time information such as supersaturation, formation of precipitates and identification of crystal polymorphs could be obtained by Raman spectrometry. The solubility prediction models, monitoring methods for precipitation and empirical model for absorption developed in this study, together with the methodologies used, give valuable information on aspects of industrial crystallization. Furthermore, Raman analysis was seen to be a potential control method for various crystallization processes.
Abstract:
Product assurance is an essential part of the product development process if developers want to ensure that the final product is safe and reliable. Product assurance can be supported with risk management and with different failure analysis methods. Product assurance is emphasized in the system development process of mission-critical systems. The product assurance process in systems of this kind requires extra attention. In this thesis, mission-critical systems are space systems, and the product assurance process of these systems is presented with the help of space standards. The product assurance process can be supported with agile development because agile emphasizes transparency of the process and fast response to changes. Even if the development process of space systems is highly standardized and resembles the waterfall model, it is still possible to adopt agile development in space systems development. This thesis aims to support the product assurance process of space systems with agile development so that the final product would be as safe and reliable as possible. The main purpose of this thesis is to examine how well product assurance is performed in Finnish space organizations and how product assurance tasks and activities can be supported with agile development. The research part of this thesis is performed in survey form.
Abstract:
Intelligence from a human source that is falsely thought to be true is potentially more harmful than a total lack of it. The veracity assessment of the gathered intelligence is one of the most important phases of the intelligence process. Lie detection and veracity assessment methods have been studied widely, but a comprehensive analysis of these methods’ applicability is lacking. There are some problems related to the efficacy of lie detection and veracity assessment. According to a conventional belief, an almighty lie detection method exists that is almost 100% accurate and suitable for any social encounter. However, scientific studies have shown that this is not the case, and popular approaches are often oversimplified. The main research question of this study was: What is the applicability of veracity assessment methods, which are reliable and based on scientific proof, in terms of the following criteria?
o Accuracy, i.e. the probability of detecting deception successfully
o Ease of Use, i.e. how easily the method can be applied correctly
o Time Required to apply the method reliably
o No Need for Special Equipment
o Unobtrusiveness of the method
In order to answer the main research question, the following supporting research questions were answered first: What kinds of interviewing and interrogation techniques exist, and how could they be used in the intelligence interview context? What kinds of lie detection and veracity assessment methods exist that are reliable and based on scientific proof? And what kinds of uncertainty and other limitations are included in these methods? Two major databases, Google Scholar and Science Direct, were used to search and collect existing topic-related studies and other papers. After the search phase, an understanding of the existing lie detection and veracity assessment methods was established through a meta-analysis.
A Multi-Criteria Analysis utilizing the Analytic Hierarchy Process was conducted to compare scientifically valid lie detection and veracity assessment methods in terms of the assessment criteria. In addition, a field study was arranged to get firsthand experience of the applicability of different lie detection and veracity assessment methods. The Studied Features of Discourse and the Studied Features of Nonverbal Communication gained the highest ranking in overall applicability. They were assessed to be the easiest and fastest to apply, and to have the required temporal and contextual sensitivity. The Plausibility and Inner Logic of the Statement, the Method for Assessing the Credibility of Evidence and the Criteria Based Content Analysis were also found to be useful, but with some limitations. The Discourse Analysis and the Polygraph were assessed to be the least applicable. Results from the field study support these findings. However, it was also discovered that the most applicable methods are not entirely trouble-free either. In addition, this study highlighted that three channels of information, Content, Discourse and Nonverbal Communication, can be subjected to veracity assessment methods that are scientifically defensible. There is at least one reliable and applicable veracity assessment method for each of the three channels. All of the methods require disciplined application and a scientific working approach. There are no quick gains if high accuracy and reliability are desired. Since most current lie detection studies are concentrated around a scenario where roughly half of the assessed people are totally truthful and the other half are liars who present a well-prepared cover story, it is proposed that in future studies lie detection and veracity assessment methods be tested against partially truthful human sources.
This kind of test setup would highlight new challenges and opportunities for the use of existing and widely studied lie detection methods, as well as for the modern ones that are still under development.
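The Analytic Hierarchy Process used for the multi-criteria comparison can be sketched as follows. The pairwise-comparison matrix in the usage example is hypothetical rather than the study's actual judgments, and the row geometric mean is used as a standard approximation of Saaty's principal-eigenvector weighting:

```python
import math

def ahp_priorities(matrix):
    """Priority weights from a reciprocal pairwise-comparison matrix,
    using the row geometric mean approximation of the principal eigenvector."""
    n = len(matrix)
    gm = [math.prod(row) ** (1.0 / n) for row in matrix]
    total = sum(gm)
    return [g / total for g in gm]

def consistency_ratio(matrix):
    """Saaty's CR = CI / RI with CI = (lambda_max - n) / (n - 1);
    judgments are conventionally deemed acceptable when CR < 0.1."""
    n = len(matrix)
    w = ahp_priorities(matrix)
    # Estimate lambda_max from (M w)_i / w_i averaged over the rows.
    lam = sum(
        sum(matrix[i][j] * w[j] for j in range(n)) / w[i] for i in range(n)
    ) / n
    random_index = {3: 0.58, 4: 0.90, 5: 1.12}[n]  # Saaty's tabulated RI values
    return (lam - n) / (n - 1) / random_index
```

For example, a perfectly consistent 3x3 matrix [[1, 2, 4], [0.5, 1, 2], [0.25, 0.5, 1]] yields weights proportional to 4:2:1 with a consistency ratio of zero; real expert judgments produce small positive ratios that flag inconsistent comparisons.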
Abstract:
This research is a self-study into my life as an athlete, elementary school teacher, learner, and as a teacher educator/academic. Throughout the inquiry, I explore how my beliefs and values infused my lived experiences and ultimately influenced my constructivist, humanist, and ultimately my holistic teaching and learning practice which at times disrupted the status quo. I have written a collection of narratives (data generation) which embodied my identity as an unintelligent student/learner, a teacher/learner, an experiential learner, a tenacious participant, and a change agent, to name a few. As I unpack my stories and hermeneutically reconstruct their intent, I question their meaning as I explore how I can improve my teaching and learning practice and potentially effect positive change when instructing beginning teacher candidates at a Faculty of Education. At the outset I situate my story and provide the necessary political, social, and cultural background information to ground my research. I follow this with an in-depth look at the elements that interconnect the theoretical framework of this self-study by presenting the notion of writing at the boundaries through autoethnography (Ellis, 2000; Ellis & Bochner, 2004) and writing as a method of inquiry (Richardson, 2000). The emergent themes of experiential learning, identity, and embodied knowing surfaced during the data generation phase. I use Probyn's (1990) metaphor of locatedness to unpack these themes and ponder the question: Where is experience located? I deepen the exploration by layering Drake's (2007) Know/Do/Be framework alongside locatedness and offer descriptions of learning moments grounded in pedagogical theories. In the final phase, I introduce thirdspace theory (Bhabha, 1994; Soja, 1996) as a space that allowed me to puzzle through educational dilemmas and begin to reconcile the binaries that existed in my life both personally and professionally.
I end where I began by revisiting the questions that drove this study. In addition, I reflect upon the writing process and the challenges that I encountered while immersed in this approach, and contemplate the relevance of conducting a self-study. I leave the reader with what is waiting for me on the other side of the gate, for as Henry James suggested, "Experience is never limited, and it is never complete."
Abstract:
Research Question: What are the psychosocial factors that affect causality assessment in early phase oncology clinical trials? Methods: Thirty-two qualitative interviews were explicated with the aid of “Naturalistic Decision Making”. Data explication consisted of phenomenological reduction, delineating and clustering meaning units, forming themes, and creating a composite summary. Participants were members of the National Cancer Institute of Canada’s Clinical Trial Group Investigative New Drug committee. Results: The process of assigning causality is extremely subjective and full of uncertainty. Physicians had no formal training, nor a tool to assist them with this process. Physicians were apprehensive about their decisions and felt pressure from their patients, as well as the pharmaceutical companies sponsoring the trial. Conclusions: There are many problem areas when attributing causality, all of which have serious consequences, but clinicians used a variety of methods to cope with these problem areas.
Abstract:
A significant number of adults in adult literacy programs in Ontario have specific learning difficulties. This study sought to examine the holistic factors that contributed to these learners achieving their goals. Through a case study design, the data revealed that a combination of specific learning methods and strategies, along with particular characteristics of the instructor, participant, and class, and the evidence of self-transformation all seemed to contribute to the participant's success in the program. Instructor-directed teaching and cooperative learning were the main learning methods used in the class. General learning strategies employed were the use of a core curriculum and authentic documents, phonics, repetition, assistive resources, and activities that appealed to various learning styles. The instructor had a history of both professional development in the area of learning disabilities and experience working with learners who had specific learning difficulties. There also seemed to be a goodness of fit between the participant and the instructor. Several characteristics of the participant seemed to aid in his success: his positive self-esteem, self-advocacy skills, self-determination, self-awareness, and the fact that he enjoyed learning. The size (3-5 people) and type of class (small group) also seemed to have an impact. Finally, evidence that the participant went through a self-transformation seemed to contribute to a positive learner identity. These results have implications for practice, theory, and further research in adult education.
Abstract:
The Dudding group is interested in the application of Density Functional Theory (DFT) in developing asymmetric methodologies, and thus the focus of this dissertation will be on the integration of these approaches. Several interrelated subsets of computer-aided design and implementation in catalysis have been addressed during the course of these studies. The first of the aims rested upon the advancement of methodologies for the synthesis of biologically active C(1)-chiral 3-methylene-indan-1-ols, which in practice led to the use of a sequential asymmetric Yamamoto-Sakurai-Hosomi allylation/Mizoroki-Heck reaction sequence. An important aspect of this work was the utilization of ortho-substituted arylaldehyde reagents, which are known to be a problematic class of substrates for existing asymmetric allylation approaches. The second phase of my research program led to the further development of asymmetric allylation methods using o-arylaldehyde substrates for the synthesis of chiral C(3)-substituted phthalides. Apart from the de novo design of these chemistries in silico, which notably utilized water-tolerant, inexpensive, and relatively environmentally benign indium metal, this work represented the first computational study of a stereoselective indium-mediated process. Following from these discoveries was the advent of a related, yet catalytic, Ag(I)-catalyzed approach for preparing C(3)-substituted phthalides that, from a practical standpoint, was complementary in many ways. Not only did this new methodology build upon my earlier work with the integrated (experimental/computational) use of Ag(I)-catalyzed asymmetric methods in synthesis, it provided fundamental insight, arrived at through DFT calculations, regarding the Yamamoto-Sakurai-Hosomi allylation. The development of ligands for unprecedented asymmetric Lewis base catalysis, especially asymmetric allylations using silver and indium metals, followed as a natural extension from these earlier discoveries.
To this end, forthcoming as well was the advancement of a family of disubstituted (N-cyclopropenium guanidine/N-imidazoliumyl-substituted cyclopropenylimine) nitrogen adducts that has provided fundamental insight into chemical bonding and offered an unprecedented class of phase transfer catalysts (PTC) with far-reaching potential. Salient features of these disubstituted nitrogen species are the unprecedented finding of a cyclopropenium-based C-H•••π(aryl) interaction, as well as the presence of a highly dissociated anion, which projected them to serve as catalysts promoting fluorination reactions. Attracted by the timely development of these disubstituted nitrogen adducts, my last studies as a PhD scholar addressed the utility of one of the synthesized disubstituted nitrogen adducts as a valuable catalyst for the benzylation of the Schiff base N-(diphenylmethylene)glycine ethyl ester. Additionally, the catalyst was applied to benzylic fluorination; emerging from this exploration was the successful fluorination of benzyl bromide and its derivatives in high yields. A notable feature of this protocol is the column-free purification of the product and the recovery of the catalyst for use in a further reaction sequence.
Abstract:
The spectacular progress made in the treatment of cystic fibrosis means that a growing number of families with an adolescent affected by this health problem can now contemplate the transition from adolescence to adulthood. To date, research on this subject remains scarce, particularly regarding the nature of the interactions between the adolescents, their parents, and the health professionals who prepare these families for the transfer from a pediatric institution to an adult one. This phenomenon was apprehended from a constructivist and systemic perspective, which made it possible to highlight some of its many facets and to shed new light on the dynamics between family members and between them and the health professionals. The purpose of this qualitative case study was to model, in a systemic manner, the transition process for families with an adolescent with cystic fibrosis in the pre-transfer phase, from the pediatric institution to the adult setting. Semi-structured interviews were conducted with seven families with an adolescent with cystic fibrosis. In addition, a group interview was also conducted with an interprofessional team working in a clinic that treats adolescents with cystic fibrosis. Qualitative analysis of the data led to the development of a systemic model of the transition for these families. This model underscores that, while evolving in parallel, the families and the health professionals pursue the same goal of fostering the development of the adolescent's autonomy.
This transition process for these families takes place, moreover, within a time frame defined by the inter-institutional transfer, with little consideration for the parental suffering related to the prognosis of the disease. Thus, the development of autonomy is marked by the trust that must be established between the adolescent and the parent, passing through the parent's monitoring of the adolescent and the adolescent's gradual assumption of responsibility. This study therefore proposes a systemic model of this transition process that contributes not only to the development of the concept of transition in the health sciences, but also to clinical practice and research in the field of care for families with an adolescent with cystic fibrosis.