Abstract:
The rotational speed of high-speed electric machines exceeds 15 000 rpm. These machines are compact relative to their power rating; as a consequence, the heat fluxes are high and the adequacy of cooling becomes an important design criterion. In high-speed machines, the air gap between the stator and the rotor is a narrow flow channel. The cooling air is produced with a fan and the flow is then directed to the air gap. The flow in the gap does not provide sufficient cooling for the stator end windings, and therefore additional cooling is required. This study investigates the heat transfer and flow fields around the coil end windings when cooling jets are used. As a result, a new assembly for the cooling jets is introduced, with the benefits of fewer hot spots, a lower pressure drop and hence a lower power demand for the cooling fan. The information gained can also be applied to improving the cooling of electric machines through geometry modifications. The objective of the research is to determine the locations of the hot spots and the pressure losses induced by different jet alternatives. Several possibilities for arranging the additional cooling are considered. In the suggested approach, cooling is provided by a row of air jets. The air jets have three main tasks: to cool the coils effectively by direct impingement, to increase and cool down the flow that enters the coil end space through the air gap, and to ensure the correct distribution of the flow by forming an air curtain with additional jets. One important aim of this study is to arrange the cooling jets in such a manner that hot spots can largely be avoided, which enables a higher power density in high-speed motors. The cooling system can also be applied to ordinary electric machines when efficient cooling is needed. The numerical calculations have been performed with commercial Computational Fluid Dynamics software. Two geometries have been generated: a cylindrical one for the studied machine and a Cartesian one for the experimental model. The main parameters include the positions, arrangements and number of jets, the jet diameters, and the jet velocities. The investigated cases have been tested with two widely used turbulence models and a computational grid of over 500 000 cells. The experimental tests have been made with a simplified model of the end winding space with cooling jets; in the experiments, the emphasis has been on flow visualisation. The computational analysis shows good agreement with the experimental results. Modelling of the cooling jet arrangement also enables a better understanding of the complex heat transfer in the end winding space.
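For orientation, the order of magnitude of the quantities discussed above can be estimated with a back-of-the-envelope calculation. The sketch below computes the Reynolds number of a single cooling jet, the total flow delivered by a row of jets, and the fan power implied by a given pressure drop; all diameters, velocities, pressure drops and efficiencies are hypothetical placeholder values, not figures from the thesis.

```python
# Rough sizing estimate for an impingement cooling jet (illustrative values only).
import math

rho = 1.1      # air density at ~45 degC [kg/m^3] (assumed)
mu = 1.9e-5    # dynamic viscosity of air [Pa.s] (assumed)
d_jet = 0.008  # jet diameter [m] (hypothetical)
v_jet = 40.0   # jet exit velocity [m/s] (hypothetical)
n_jets = 12    # number of jets in the row (hypothetical)

# Jet Reynolds number: turbulent impingement jets typically require Re well above ~3000.
re_jet = rho * v_jet * d_jet / mu

# Total volumetric flow delivered by the jet row.
q_total = n_jets * v_jet * math.pi * d_jet**2 / 4.0   # [m^3/s]

# Fan power implied by the pressure drop of the jet arrangement.
dp = 1500.0     # pressure drop across the jet assembly [Pa] (hypothetical)
eta_fan = 0.6   # fan efficiency (assumed)
p_fan = dp * q_total / eta_fan                         # [W]

print(f"Re_jet = {re_jet:.0f}, Q = {q_total*1000:.1f} l/s, fan power = {p_fan:.1f} W")
```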
Abstract:
Granular flow phenomena are frequently encountered in the design of process and industrial plants in the traditional fields of the chemical, nuclear and oil industries, as well as in other activities such as food and materials handling. Multiphase flow is one important branch of granular flow. Granular materials behave unusually compared with ordinary solids or fluids. Although some of their characteristics are still not well understood, one thing is confirmed: particle-particle interaction plays a key role in the dynamics of granular materials, especially dense granular materials. The first part of this thesis presents in detail the development of two models describing this interaction, based on the results of finite-element simulation, dimensional analysis and numerical simulation. The first model describes the normal collision of viscoelastic particles. Building on existing models, additional parameters are introduced, which allow the model to predict experimental results more accurately. The second model covers oblique collisions and includes the effects of tangential velocity, angular velocity and surface friction based on Coulomb's law. The theoretical predictions of this model agree with those of finite-element simulations. In the latter chapters of the thesis, the models are used to predict industrial granular flows, and the agreement between simulations and experiments further validates the new models. The first case is a simulation of granular flow passing over a circular obstacle. The simulations successfully predict the existence of a parabolic steady layer and show how particle characteristics, such as the coefficients of restitution and surface friction, affect the separation results. The second case is a spinning container filled with granular material. Employing the previous models, the simulations also reproduce experimentally observed phenomena, such as a central depression at high rotation frequencies. The third application concerns gas-solid mixed flow in a vertically vibrated device, where gas-phase motion is coupled with the particle motion. The governing equations of the gas phase are solved using large eddy simulation (LES), and the particle motion is predicted with a Lagrangian method. The simulations reproduce some of the pattern formation reported in experiments.
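As a minimal illustration of the kind of normal-contact model referred to above (a standard nonlinear spring-dashpot baseline, not the thesis's own formulation), the sketch below integrates a viscoelastic sphere-sphere collision and reports the resulting coefficient of restitution; all material parameters are hypothetical.

```python
# Normal collision of two viscoelastic spheres with a Hertzian spring plus
# a simple viscous damping term (a common baseline model, not the thesis's model).
import numpy as np

k = 1.0e7      # Hertzian stiffness [N/m^1.5] (hypothetical)
c = 50.0       # damping coefficient [N.s/m^1.5] (hypothetical)
m_eff = 1.0e-3 # effective mass of the pair [kg] (hypothetical)
v0 = 1.0       # approach velocity [m/s]

dt = 1.0e-7
delta, ddelta = 0.0, v0   # overlap and overlap rate
while True:
    # F_n = k*delta^(3/2) + c*sqrt(delta)*ddelta  (Kuwabara-Kono/Tsuji-type form)
    f_n = k * delta**1.5 + c * np.sqrt(delta) * ddelta
    f_n = max(f_n, 0.0)            # no tensile contact force
    ddelta += -(f_n / m_eff) * dt  # contact force opposes further overlap
    delta += ddelta * dt
    if delta <= 0.0:               # spheres separate
        break

e = -ddelta / v0  # coefficient of restitution
print(f"coefficient of restitution = {e:.3f}")
```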
Abstract:
Antibody display technology (ADT), such as phage display (PD), has substantially improved the production of monoclonal antibodies (mAbs) and Ab fragments by bypassing several limitations associated with the traditional hybridoma approach. In the current study, we capitalized on PD technology to produce a high-affinity single chain variable fragment (scFv) against tumor necrosis factor-alpha (TNF-α), a potent pro-inflammatory cytokine that plays an important role in various inflammatory diseases and malignancies. To produce scFv antibody fragments against human TNF-α, we performed five rounds of biopanning using stepwise decreased amounts of TNF-α (1 to 0.1 μg), a semi-synthetic phage antibody library (Tomlinson I + J) and TG1 cells. Antibody clones were isolated and selected through enzyme-linked immunosorbent assay (ELISA) screening. The selected scFv antibody fragments were further characterized by means of ELISA, PCR, restriction fragment length polymorphism (RFLP) and Western blot analyses, as well as fluorescence microscopy and flow cytometry. Based upon binding affinity to TNF-α, 15 clones were selected out of 50 positive clones enriched by PD in vitro selection. The selected scFvs displayed high specificity and binding affinity, with Kd values in the nM range, to human TNF-α. Immunofluorescence analysis revealed significant binding of the selected scFv antibody fragments to Raji B lymphoblasts. The effectiveness of the selected scFv fragments was further validated by flow cytometry analysis in lipopolysaccharide (LPS)-treated mouse fibroblast L929 cells. Based upon these findings, we propose the selected fully human anti-TNF-α scFv antibody fragments as potential immunotherapy agents that may be translated into preclinical/clinical applications.
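For context, binding affinity (Kd) from a titration experiment is commonly estimated by fitting a one-site saturation binding model; the sketch below shows such a fit on synthetic data and is not the analysis pipeline used in the study.

```python
# Fit a one-site specific-binding model, signal = Bmax*[L]/(Kd + [L]),
# to ELISA-style titration data (synthetic example values).
import numpy as np
from scipy.optimize import curve_fit

def one_site(conc, bmax, kd):
    return bmax * conc / (kd + conc)

conc_nM = np.array([0.5, 1, 2, 5, 10, 20, 50, 100])                   # antigen concentration [nM]
signal  = np.array([0.10, 0.19, 0.33, 0.55, 0.72, 0.86, 0.96, 1.00])  # normalised OD450

(bmax, kd), _ = curve_fit(one_site, conc_nM, signal, p0=[1.0, 10.0])
print(f"Bmax = {bmax:.2f}, Kd = {kd:.1f} nM")
```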
Abstract:
VALOSADE is a research project of Professor Anita Lukka's VALORE research team at Lappeenranta University of Technology. VALOSADE is part of the ELO technology programme of Tekes. SMILE is one of the four subprojects of VALOSADE. The SMILE study focuses on the case of a company network composed of small and micro-sized mechanical maintenance service providers and forest industry companies as large-scale customers. The basic principle of the SMILE study is communication and ebusiness in supply and demand networks. The aim of the study is to develop an ebusiness strategy, an ebusiness model and e-processes among the SME local service providers and, on the other hand, between the local service provider network and the forest industry customers in the maintenance and operations service business. A literature review, interviews and benchmarking are used as research methods in this qualitative case study. The first SMILE report, 'Ebusiness between Global Company and Its Local SME Supplier Network', created the background for the SMILE study by examining general trends of ebusiness in supply chains and networks of different industries. This second phase of the study concentrates on the case network background, such as business relationships, information systems and business objectives; the core processes in the maintenance and operations service network; development needs in communication among the network participants; and ICT solutions that respond to needs in a changing environment. In the theory part of the report, different ebusiness models and frameworks are introduced. These models and frameworks are compared with the empirical case data, and from this analysis recommendations for the development of the network information system are derived. In a process industry such as the forest industry, it is crucial to achieve a high level of operational efficiency and reliability, which places great requirements on maintenance and operations. Therefore, partnerships or strategic alliances are needed between the network participants. In partnerships and alliances, deep communication is important, and therefore the information systems in the network are also critical. Communication, coordination and collaboration will increase in the case network in the future, because network resources must be optimised to improve the competitive capability of the forest industry customers and the efficiency of their service providers. At present, ebusiness systems are not common in this maintenance network. A network information system between the forest industry customers and their local service providers is actually the only genuine network information system in the whole network; however, its utilisation has been quite insignificant, and the current system does not add enough value either to the customers or to the local service providers. At present, the network information system is an infomediary that shares static information with the network partners. It should instead be a transaction intermediary that integrates the internal processes of the network companies, provides common standardised processes for the local service providers, and acts as an infomediary that shares static and dynamic information at the right time, with the right partner, at the right cost, in the right format and of the right quality. This study provides recommendations on how to develop the system in the future so that it adds value to the network companies.
Ebusiness scenarios, vision, objectives, strategies, application architecture, the ebusiness model, core processes and a development strategy must be considered when the network information system is developed in the next step. The core processes in the case network are demand/capacity management, customer/supplier relationship management, service delivery management, knowledge management and cash flow management. Most of the benefits from ebusiness solutions come from making operational-level processes, such as service delivery management and cash flow management, electronic.
Abstract:
Even though research on innovation in services has expanded remarkably, especially during the past two decades, there is still a need to increase understanding of the special characteristics of service innovation. In addition to studying innovation in service companies and industries, research has recently also focused more on services in innovation, as the significance of so-called knowledge intensive business services (KIBS) for the competitive edge of their clients, other companies, regions and even nations has been demonstrated in several previous studies. This study focuses on technology-based KIBS firms, and on the technology and engineering consulting (TEC) sector in particular. These firms have multiple roles in innovation systems, and thus there is also a need for in-depth studies that increase knowledge about the types and dimensions of service innovations as well as the underlying mechanisms and procedures that make the innovations successful. The main aim of this study is to generate new knowledge in the fragmented research field of service innovation management by recognizing the different types of innovations in TEC services and some of the enablers of and barriers to innovation capacity in the field, especially from the knowledge management perspective. The study also aims to shed light on some of the existing routines and new constructions needed for enhancing service innovation and knowledge processing activities in KIBS companies of the TEC sector. The main data in this research include literature reviews and public data sources, complemented by a qualitative research approach with exploratory case studies conducted through interviews at technology consulting companies in Singapore in 2006. These complement the qualitative interview data gathered previously in Finland during a larger research project in 2004-2005. The data are also supplemented by a survey conducted in Singapore; the respondents to the survey by Tan (2007) were technology consulting companies operating in the Singapore region. The purpose of the quantitative part of the study was to validate and further examine specific aspects, such as the influence of knowledge management activities on innovativeness and the different types of service innovations in which the technology consultancies are involved. Singapore is known as a South-East Asian knowledge hub and is thus a significant research area where several multinational knowledge-intensive service firms operate. Typically, the service innovations identified in the studied TEC firms were formed by several dimensions of innovation. In addition to technological aspects, innovations were, for instance, related to new client interfaces and service delivery processes. The main enablers of and barriers to innovation seem to be partly similar in Singaporean firms compared with the earlier study of Finnish TEC firms. The empirical studies also brought forth the significance of various sources of knowledge and knowledge processing activities as the main driving forces of service innovation in technology-related KIBS firms. A framework was also developed to study the effect of knowledge processing capabilities, as well as some moderators, on the innovativeness of TEC firms. Efficient knowledge acquisition and environmental dynamism in particular seem to influence the innovativeness of TEC firms positively.
The results of the study also contribute to the present service innovation literature by focusing more on 'innovation within KIBS' rather than 'innovation through KIBS', which has been the typical viewpoint stressed in the previous literature. Additionally, the study provides several possibilities for further research.
Abstract:
The building industry has a particular interest in using clinching as a joining method for frame constructions in light-frame housing. Normally many clinch joints are required in the joining of frames. In order to maximise the strength of the complete assembly, each clinch joint must be as sound as possible. Experimental testing is the main means of optimising a particular clinch joint; this includes shear strength testing and visual observation of joint cross-sections. The manufacturers of clinching equipment normally perform such experimental trials. Finite element analysis can also be used to optimise the tool geometry and the process parameter X, which represents the thickness of the base of the joint. However, such procedures require dedicated software, a skilled operator, and test specimens to verify the finite element model. In addition, with current technology several hours of computing time may be necessary. The objective of the study was to develop a simple calculation procedure for rapidly establishing an optimum value of the parameter X for a given tool combination. It should be possible to use the procedure on a daily basis, without stringent demands on the skill of the operator or on the equipment, and the procedure should significantly decrease the number of shear strength tests required for verification. The experimental work involved tests to obtain an understanding of the behaviour of the sheets during clinching. The most notable observation concerned the stage of the process in which the upper sheet was initially bent, after which the deformation mechanism changed to shearing and elongation. The amount of deformation was measured relative to the original location of the upper sheet and characterised as the C-measure. By understanding the behaviour of the upper sheet in detail, it was possible to estimate a bending line function for the surface of the upper sheet. A procedure was developed that makes it possible to estimate the process parameter X for each tool combination with a fixed die. The procedure is based on equating the volume of material on the punch side with the volume of the die; detailed information on the behaviour of the material on the punch side is required, and the volume of the die is assumed not to change during the process. The procedure was applied to shear strength testing of a sample material, a continuously hot-dip zinc-coated high-strength constructional steel with a nominal thickness of 1.0 mm and a minimum Rp0.2 proof stress of 637 N/mm². Such material has not yet been used extensively in light-frame housing, and little has been published on clinching it, so its performance is of particular interest. Companies that use clinching on a daily basis stand to gain the greatest benefit from the procedure. By understanding the behaviour of the sheets in different cases, it is possible to use the data at an early stage for adjusting and optimising the process. In particular, the functionality of common tools can be increased, since it is possible to characterise the complete range of existing tools. The study increases and broadens the amount of basic information concerning the clinching process, and new approaches and points of view are presented and used to generate new knowledge.
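The volume-balance idea can be sketched as a simple root-finding problem: choose X so that the punch-side material volume equals the fixed die volume. The geometric model below is a deliberately crude, hypothetical stand-in for the bending-line-based description developed in the thesis, and all dimensions are illustrative.

```python
# Solve the volume balance V_punch_side(X) = V_die for the bottom thickness X.
# The geometric model is a simplified, hypothetical placeholder, not the thesis's model.
import math
from scipy.optimize import brentq

d_punch = 5.0e-3   # punch diameter [m] (hypothetical)
t_sheets = 2.0e-3  # combined thickness of the two sheets [m] (1.0 mm each)
v_die = 2.5e-8     # fixed die cavity volume [m^3] (hypothetical)

def punch_side_volume(x):
    # Simplified assumption: the punch displaces a cylindrical slug of material
    # whose height is the total sheet thickness minus the residual thickness X.
    return math.pi / 4.0 * d_punch**2 * (t_sheets - x)

# Find X such that the displaced volume fills the die cavity exactly.
x_opt = brentq(lambda x: punch_side_volume(x) - v_die, 1.0e-5, t_sheets - 1.0e-5)
print(f"estimated process parameter X = {x_opt*1000:.2f} mm")
```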
Abstract:
The objective of the dissertation is to increase understanding and knowledge in the field where group decision support system (GDSS) and technology selection research overlap in the strategic sense. The purpose is to develop pragmatic, unique and competent management practices and processes for strategic technology assessment and selection from the whole company's point of view. The combination of the GDSS and technology selection is approached from the points of view of the core competence concept, the lead user method, and different technology types. The aim of this research is to find out how the GDSS contributes to the technology selection process, what aspects should be considered when selecting technologies to be developed or acquired, and what advantages and restrictions the GDSS has in the selection processes. These research objectives are discussed on the basis of experiences and findings from real-life selection meetings. The research has been carried out mainly with constructive, case study research methods. The study contributes novel ideas to the present knowledge and prior literature on the GDSS and technology selection arena. Academic and pragmatic research has been conducted in four areas: 1) the potential benefits of the group support system with the lead user method, where the needs assessment process is positioned as information gathering for the selection of wireless technology development projects; 2) integrated technology selection and core competence management processes, both in theory and in practice; 3) the potential benefits of the group decision support system in the technology selection processes of different technology types; and 4) linkages between technology selection and R&D project selection in innovative product development networks. New knowledge and understanding have been created on the practical utilization of the GDSS in technology selection decisions. The study demonstrates that technology selection requires close cooperation between different departments, functions and strategic business units in order to gather the best knowledge for decision making. The GDSS proves to be an effective way to promote communication and cooperation between the selectors. The constructs developed in this study have been tested in many industry fields, for example in the information and communication, forest, telecommunication, metal, software and miscellaneous industries, as well as in non-profit organizations. The pragmatic results in these organizations are among the most relevant proofs confirming the scientific contribution of the study, according to the principles of the constructive research approach.
Abstract:
Information technology (IT) outsourcing has traditionally been seen as a means to acquire new resources and competencies to perform standard tasks at lowered cost. This dissertation challenges the thought that outsourcing should be limited to non-strategic systems and components, and presents ways to maximize outsourcing-enabled benefits while minimizing associated risks. In this dissertation IT outsourcing is approached as an efficiency improvement and value-creation process rather than a sourcing decision. The study focuses on when and how to outsource information technology, and presents a new set of critical success factors for outsourcing project management. In a case study it re-validates the theory-based proposition that in certain cases and situations it is beneficial to partly outsource even strategic IT systems. The main contribution of this dissertation is the validation of the proposal that in companies where the level of IT competency is high, managerial support is established and planning processes are well defined, it is possible to safely outsource even business-critical IT systems. A model describing the critical success factors in such cases is presented, based on existing knowledge in the field and the results of the empirical study. This model further highlights the essence of aligning IT and business strategies, assuming a long-term focus on partnering, and the overall target of outsourcing being to add to the strengths of the company rather than to eliminate weaknesses.
Abstract:
This work was carried out in the Laboratory of Fluid Dynamics at Lappeenranta University of Technology during the years 1991-1996. The research was part of a larger high-speed technology development research programme. First, there was the idea of building high-speed machinery applications around the Brayton cycle. There was a clear need to deepen the knowledge of the cycle itself and to take a new approach in this field of research. The removal of water from humid air also seemed very interesting. The goal of this work was to study methods of designing high-speed machinery for the reversed Brayton cycle, from theoretical principles to practical applications. The reversed Brayton cycle can be employed as an air dryer, a heat pump or a refrigerating machine. In this research, the use of humid air as a working fluid also has an environmental advantage. A new calculation method for the Brayton cycle is developed. In this method, the expansion process in the turbine is especially important because of the condensation of water vapour in the humid air. This physical phenomenon can have significant effects on the performance of the application. The influence of calculating the process with actual, achievable process equipment efficiencies is also essential for the development of future machinery. The theoretical calculations are confirmed with two different laboratory prototypes. The high-speed machinery concept allows an application to be built with only one rotating shaft carrying all the major parts: the high-speed motor, the compressor and the turbine wheel. The use of oil-free bearings and high rotational speeds gives several advantages compared with conventional machinery: light weight, compact structure, safe operation and higher efficiency over a large operating region. There are always problems when theory is applied to practice. The calibrations of the pressure, temperature and humidity probes were made with care, but measurable errors were still not negligible. Several different separators were examined, and in all cases the amount of separated water could not be determined exactly. Due to the compact sizes and structures of the prototypes, the process measurements were somewhat difficult. The experimental results agree well with the theoretical calculations. These experiments prove the operation of the process and lay the groundwork for further development. The results of this work offer very promising possibilities for the design of new, commercially competitive applications that use high-speed machinery and the reversed Brayton cycle.
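As a simple orientation to the cycle calculations mentioned above, the sketch below evaluates the state temperatures and refrigeration COP of a reversed Brayton cycle with non-ideal component efficiencies; it is a dry-air, ideal-gas estimate that deliberately ignores the humid-air condensation effects central to the thesis, and all numbers are illustrative.

```python
# Reversed Brayton (air) refrigeration cycle with isentropic component efficiencies.
# Dry-air ideal-gas estimate; humid-air condensation effects are neglected here.
cp, gamma = 1005.0, 1.4        # air properties [J/(kg K)], [-]
t1 = 293.0                     # compressor inlet temperature [K] (assumed)
t3 = 303.0                     # turbine inlet temperature after heat rejection [K] (assumed)
pr = 2.5                       # pressure ratio (hypothetical)
eta_c, eta_t = 0.78, 0.80      # compressor/turbine isentropic efficiencies (assumed)

tau = pr ** ((gamma - 1.0) / gamma)

# Compression 1 -> 2 and expansion 3 -> 4 with losses.
t2 = t1 * (1.0 + (tau - 1.0) / eta_c)
t4 = t3 * (1.0 - eta_t * (1.0 - 1.0 / tau))

w_comp = cp * (t2 - t1)        # specific compressor work [J/kg]
w_turb = cp * (t3 - t4)        # specific turbine work recovered [J/kg]
q_cold = cp * (t1 - t4)        # specific refrigeration effect [J/kg]
cop = q_cold / (w_comp - w_turb)

print(f"T2 = {t2:.1f} K, T4 = {t4:.1f} K, COP = {cop:.2f}")
```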
Abstract:
This work studied a new technology for pigment coating. The technique is generally known in certain other industries. The literature review presents the process and its various elements, as well as the process variables known from other fields. The theory of coating colours and of the surfaces to be coated is examined in the light of the new technique and of pigment coating. The basic mechanisms of the new technique were investigated in the experimental part. The stability of the falling liquid film was studied by means of the minimum flow rate. A Taguchi matrix in DOE (Design of Experiments) software was used to carry out the stability study. According to the experiments, with respect to the minimum flow rate a calcium carbonate coating colour has a more favourable composition than a kaolin colour. As for the binder, a smaller proportion of latex is favourable, and polyvinyl alcohol performs better. A higher proportion of surfactant and a low solids content of the colour are recommended. Effective deaeration of the coating colour is also important for the final result. In coating trials run on a pilot machine, the importance of the properties of the falling film was observed. Even small amounts of gas in the coating colour disturbed the quality of the final coating. Deaeration of the coating colour is of key importance, especially when small coat weights are applied at high speed. All the limiting factors presented in the literature were observed in the trial runs. In the trials, coat weights of 5-20 g/m² were applied at speeds of 400-1600 m/min. The conditions for a stable liquid film still require further development for high-speed coating. The differences in the coating were compared with blade coating methods. Blade coating produces a smooth but unevenly covering surface, whereas the coating produced by the new technique follows the topography of the base substrate. The advantage of a coating of uniform thickness is good coverage even at low coat weights.
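One commonly cited stability criterion for a falling liquid curtain (not necessarily the criterion applied in the thesis) requires the curtain Weber number We = ρqu/σ to exceed about 2. The sketch below estimates the corresponding minimum flow rate per unit width, and the coat weight it implies at a given machine speed, for assumed coating-colour properties.

```python
# Minimum wetting flow for a falling liquid curtain from the We >= 2 criterion.
import math

rho = 1400.0    # coating colour density [kg/m^3] (assumed)
sigma = 0.035   # surface tension [N/m] (assumed)
h = 0.15        # free-fall height of the curtain [m] (hypothetical)

u = math.sqrt(2.0 * 9.81 * h)       # impingement velocity from free fall [m/s]
q_min = 2.0 * sigma / (rho * u)     # minimum volumetric flow per unit width [m^2/s]

# Corresponding dry coat weight at a given machine speed and solids content.
speed = 1000.0 / 60.0               # web speed: 1000 m/min in m/s
solids = 0.60                       # solids fraction of the colour (assumed)
coat_weight = q_min * rho * solids / speed * 1000.0   # [g/m^2]
print(f"u = {u:.2f} m/s, q_min = {q_min*1e6:.1f} mm^2/s, coat weight ~ {coat_weight:.1f} g/m^2")
```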
Abstract:
In Brazil, pear production has remained in the same incipient situation over the last 15 years, due mostly to the low level of production technology. In this context, this study aimed to evaluate the development, growth and production of the pear tree cultivars Cascatense, Tenra and Hosui grafted on 'CPP' quince rootstock, using 'FT' pear as interstem. The trial was carried out in Guarapuava, State of Paraná, Southern Brazil, over five productive cycles. The pear trees were planted in September 2004, spaced at 1.0 x 4.0 m (2,500 trees ha-1), trained to a modified central leader on a four-wire trellis, with drip irrigation, and cultivated under an organic production system. The following variables were evaluated: sprouting, anthesis, yield, fruit weight, soluble solids content, titratable acidity, pulp firmness, canopy area per plant and per hectare, and trunk diameter. The pear tree cv. Tenra was outstanding in most years for fruit yield and, consequently, showed the highest accumulated yield over the period (51.6 t ha-1), followed by the cultivars Cascatense (39.7 t ha-1) and Hosui (18.7 t ha-1). All pear cultivars presented physical-chemical characteristics suitable for commercial purposes, with a minimum average soluble solids content of 11% at harvest. The maximum canopy area per hectare was attained by cv. Cascatense (3063.2 m²), which was considered insufficient for a high yield. These results suggest the need for studies with higher planting densities and other training systems, seeking to optimize canopy volume. One of the most limiting factors in the organic pear orchard was the incidence of pear dieback caused by Botryosphaeria dothidea, which was more severe in pear trees of cv. Hosui.
Abstract:
The rapid development of recent years has accelerated the process of developing new drugs. Combinatorial chemistry has made it possible to synthesise large collections of structurally diverse molecules, so-called combinatorial libraries, for biological screening. In screening, the activity related to the structure of the molecules is examined in a variety of biological assays in order to find possible 'hits', some of which may later be developed into new drug substances. For the results of the biological studies to be reliable, the synthesised compounds must be as pure as possible. High-throughput (HTP) purification is therefore needed to guarantee high-quality compounds and reliable biological data. Continuously growing throughput requirements have led to the automation and parallelisation of these purification techniques. Preparative LC/MS is suited to the fast and efficient purification of combinatorial libraries. Many factors, for example the properties of the separation column and the eluent gradient, affect the efficiency of the preparative LC/MS purification process. These parameters must be optimised to obtain the best result. In this work, basic compounds were studied under different eluent conditions. A method for determining the purity level of combinatorial libraries after LC/MS purification was optimised, and the purity of some compounds from different libraries was determined before purification.
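As a simple illustration of a purity determination of the kind mentioned above (not the optimised method of the thesis), chromatographic purity is often reported as the area of the target peak relative to the total integrated peak area; the sketch below does this for hypothetical peak areas.

```python
# Chromatographic purity as a percentage of total integrated peak area
# (hypothetical peak areas from a UV trace of one library compound).
peak_areas = {"target": 8421.0, "impurity_1": 312.0, "impurity_2": 145.0, "solvent_front": 88.0}

total_area = sum(peak_areas.values())
purity_pct = 100.0 * peak_areas["target"] / total_area
print(f"purity = {purity_pct:.1f} % of total peak area")
```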
Abstract:
The avidity of the T-cell receptor (TCR) for antigenic peptides presented by the peptide-MHC (pMHC) on cells is a key parameter for cell-mediated immunity. Yet a fundamental feature of most tumor antigen-specific CD8(+) T cells is that this avidity is low. In this study, we addressed the need to identify and select tumor-specific CD8(+) T cells of highest avidity, which are of the greatest interest for adoptive cell therapy in patients with cancer. To identify these rare cells, we developed a peptide-MHC multimer technology that uses reversible Ni(2+)-nitrilotriacetic acid histidine tags (NTAmers). NTAmers are highly stable, but upon imidazole addition they decay rapidly to pMHC monomers, allowing flow cytometry-based measurements of monomeric TCR-pMHC dissociation rates of living CD8(+) T cells over a wide avidity spectrum. We documented strong correlations between NTAmer kinetic results and those obtained by surface plasmon resonance. Using NTAmers that were deficient for CD8 binding to pMHC, we found that CD8 itself stabilized the TCR-pMHC complex, prolonging the dissociation half-life severalfold. Notably, our NTAmer technology accurately predicted the function of large panels of tumor-specific T cells that were isolated prospectively from patients with cancer. Overall, our results demonstrated that NTAmers are effective tools for isolating rare high-avidity cytotoxic T cells from patients for use in adoptive therapies for cancer treatment.
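For orientation, monomeric dissociation measured this way is typically summarised by fitting a single-exponential decay to the residual pMHC signal and converting the off-rate into a half-life (t1/2 = ln 2 / koff); the sketch below does this with synthetic data and is not the study's analysis code.

```python
# Fit a single-exponential decay S(t) = S0*exp(-koff*t) to dissociation data
# and report the off-rate and half-life (synthetic example values).
import numpy as np
from scipy.optimize import curve_fit

def decay(t, s0, koff):
    return s0 * np.exp(-koff * t)

t_s = np.array([0, 5, 10, 20, 40, 80, 160])                    # time after imidazole addition [s]
signal = np.array([1.00, 0.86, 0.74, 0.55, 0.31, 0.10, 0.01])  # normalised pMHC signal

(s0, koff), _ = curve_fit(decay, t_s, signal, p0=[1.0, 0.02])
print(f"koff = {koff:.3f} 1/s, half-life = {np.log(2)/koff:.1f} s")
```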
Abstract:
A frequency converter drive applying direct torque control was designed and built for controlling a squirrel-cage induction motor, replacing a passive brake drive. The device is a rehabilitation machine used for muscle strength measurements and strength training. The performance characteristics of commercially available motors and frequency converters were surveyed, and suitable equipment was selected on this basis. The work presents two control methods for the squirrel-cage induction motor: vector control and direct torque control. The most significant part of this work deals with the components, assembly and performance test results of the rehabilitation device prototype, in addition to a detailed safety plan.
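To illustrate the core of direct torque control in general terms, the sketch below implements the classic textbook switching table that selects an inverter voltage vector from the outputs of the flux and torque hysteresis comparators and the stator-flux sector; it is not the control code of the prototype described above.

```python
# Classic direct torque control (DTC): pick an inverter voltage vector from the
# flux/torque hysteresis outputs and the stator-flux sector (textbook scheme).
import math

# Switch states (Sa, Sb, Sc) of the active vectors V1..V6 and the zero vector.
ACTIVE = [(1, 0, 0), (1, 1, 0), (0, 1, 0), (0, 1, 1), (0, 0, 1), (1, 0, 1)]
ZERO = (0, 0, 0)

def flux_sector(flux_angle_rad):
    """Sector 1..6, with sector 1 centred on the +alpha axis (-30 deg .. +30 deg)."""
    ang = math.degrees(flux_angle_rad) % 360.0
    return int(((ang + 30.0) % 360.0) // 60.0) + 1

def select_vector(d_flux, d_torque, sector):
    """d_flux: 1 = increase / 0 = decrease flux; d_torque: +1, 0, -1."""
    if d_torque == 0:
        return ZERO
    if d_flux == 1:
        step = 1 if d_torque == 1 else -1   # V(k+1) or V(k-1)
    else:
        step = 2 if d_torque == 1 else -2   # V(k+2) or V(k-2)
    return ACTIVE[(sector - 1 + step) % 6]

# Example: flux at 40 deg (sector 2), increase both flux and torque -> V3 = (0, 1, 0).
print(select_vector(1, 1, flux_sector(math.radians(40.0))))
```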
Abstract:
Coronary artery magnetic resonance imaging (MRI) has the potential to provide the cardiologist with relevant diagnostic information on coronary artery disease in patients. The major challenge of cardiac MRI, though, is dealing with all the sources of motion that can corrupt the images and degrade the diagnostic information provided. The current thesis therefore focused on the development of new MRI techniques that change the standard approach to cardiac motion compensation in order to increase the efficiency of cardiovascular MRI, to provide more flexibility and robustness, and to deliver new temporal and tissue information. The proposed approaches help to advance coronary magnetic resonance angiography (MRA) towards an easy-to-use and multipurpose tool that can be translated to the clinical environment. The first part of the thesis focused on the study of coronary artery motion through gold-standard imaging techniques (x-ray angiography) in patients, in order to measure the precision with which the coronary arteries assume the same position beat after beat (coronary artery repositioning). We learned that intervals with minimal coronary artery repositioning occur in peak systole and in mid-diastole, and we responded with a new pulse sequence (T2-post) that enables peak-systolic imaging. The sequence was tested in healthy volunteers and, from the image quality comparison, we learned that the proposed approach provides coronary artery visualization and contrast-to-noise ratio (CNR) comparable with the standard acquisition approach, but with increased signal-to-noise ratio (SNR). The second part of the thesis explored a completely new paradigm for whole-heart cardiovascular MRI.
The proposed technique acquires the data continuously (free-running), instead of being triggered, thus increasing the efficiency of the acquisition and providing four-dimensional images of the whole heart, while respiratory self-navigation allows the scan to be performed in free breathing. This enabling technology allows anatomical and functional evaluation in four dimensions, with high spatial and temporal resolution and without the need for contrast agent injection. The enabling step is the use of a golden-angle based 3D radial trajectory, which allows a continuous sampling of k-space and a retrospective selection of the timing parameters of the reconstructed dataset. The free-running 4D acquisition was then combined with a compressed sensing reconstruction algorithm that further increases the temporal resolution of the 4D dataset, while at the same time increasing the overall image quality by removing undersampling artifacts. The obtained 4D images provide visualization of the whole coronary artery tree in each phase of the cardiac cycle and, at the same time, allow the assessment of cardiac function with a single free-breathing scan. The quality of the coronary arteries provided by the frames of the free-running 4D acquisition is in line with that obtained with the standard ECG-triggered acquisition, and the cardiac function evaluation matched that measured with gold-standard stacks of 2D cine images. Finally, the last part of the thesis focused on the development of an ultrashort echo time (UTE) acquisition scheme for the in vivo detection of calcification in the coronary arteries. Recent studies showed that UTE imaging allows the detection of coronary artery plaque calcification ex vivo, since it is able to capture the short-T2 components of the calcification. Cardiac motion, though, has prevented this technique from being applied in vivo. An ECG-triggered, self-navigated 3D radial triple-echo UTE acquisition was therefore developed and tested in healthy volunteers. The proposed sequence combines a 3D self-navigation approach with a 3D radial UTE acquisition, enabling data collection during free breathing. Three echoes are acquired simultaneously to extract the short-T2 components of the calcification, while a water-fat separation technique allows proper visualization of the coronary arteries. Even though the results are still preliminary, the proposed sequence shows great potential for the in vivo visualization of coronary artery calcification. In conclusion, the thesis presents three novel MRI approaches aimed at improved characterization and assessment of atherosclerotic coronary artery disease. These approaches provide new anatomical and functional information in four dimensions, and support tissue characterization of coronary artery plaques.
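As a minimal sketch of golden-angle ordering for a 3D radial trajectory, the snippet below generates readout directions with the double golden-means scheme; the thesis may use a different interleaving (e.g. spiral phyllotaxis), so this is an illustrative example rather than the implemented trajectory.

```python
# 3D radial spoke directions ordered with the two-dimensional golden means
# (phi1 ~ 0.4656, phi2 ~ 0.6823), giving near-uniform k-space coverage for any
# retrospectively chosen subset of consecutive spokes.
import numpy as np

PHI1, PHI2 = 0.46557123, 0.68232780  # 2D golden means

def spoke_direction(n):
    """Unit vector of the n-th readout spoke on the hemisphere."""
    kz = (n * PHI1) % 1.0                 # uniform in [0, 1) -> polar coordinate
    polar = np.arccos(kz)                 # polar angle
    azimuth = 2.0 * np.pi * ((n * PHI2) % 1.0)
    return np.array([np.sin(polar) * np.cos(azimuth),
                     np.sin(polar) * np.sin(azimuth),
                     np.cos(polar)])

directions = np.stack([spoke_direction(n) for n in range(8)])
print(np.round(directions, 3))
```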