919 results for "Three-phase Integrated Inverter"
Abstract:
High systemic levels of IP-10 at the onset of combination therapy for chronic hepatitis C mirror intrahepatic mRNA levels and predict a slower first-phase decline in HCV RNA as well as poor outcome. Recently, several genome-wide association studies have revealed that single nucleotide polymorphisms (SNPs) on chromosome 19 in proximity to IL28B predict spontaneous clearance of HCV infection as well as therapeutic outcome among patients infected with HCV genotype 1, with three such SNPs being highly predictive: rs12979860, rs12980275, and rs8099917. In the present study, we correlated genetic variations in these SNPs from 253 Caucasian patients with pretreatment plasma levels of IP-10 and HCV RNA throughout therapy within a phase III treatment trial (HCV-DITTO). The favorable genetic variations in all three SNPs (CC, AA, and TT, respectively) were significantly associated with lower baseline IP-10 (CC vs. CT/TT at rs12979860: median 189 vs. 258 pg/mL, P=0.02; AA vs. AG/GG at rs12980275: median 189 vs. 258 pg/mL, P=0.01; TT vs. TG/GG at rs8099917: median 224 vs. 288 pg/mL, P=0.04), were significantly less common among HCV genotype 1 infected patients than among those with genotype 2/3 (P<0.0001, P<0.0001, and P=0.01, respectively), and were associated with significantly higher baseline viral load than the unfavorable genotypes (6.3 vs. 5.9 log10 IU/mL, P=0.0012; 6.3 vs. 6.0 log10 IU/mL, P=0.026; and 6.3 vs. 5.8 log10 IU/mL, P=0.0003, respectively). Among HCV genotype 1 infected homozygous or heterozygous carriers of the favorable C, A, and T alleles, lower baseline IP-10 was significantly associated with a greater decline in HCV RNA on days 0-4, which translated into increased rates of achieving SVR among homozygous patients with baseline IP-10 below 150 pg/mL (85%, 75%, and 75%, respectively). In a multivariate analysis among genotype 1 infected patients, both baseline IP-10 and the SNPs were significant independent predictors of SVR.
Conclusion: Baseline plasma IP-10 is significantly associated with IL28B variations and augments the prediction of the first-phase decline in HCV RNA and of final treatment outcome.
Abstract:
This phase of the research project involved two major efforts: (1) complete the implementation of AEC-Sync (formerly known as Attolist) on the Iowa Falls Arch Bridge project and (2) develop a web-based project management system (WPMS) for projects under $10 million. For the first effort, AEC-Sync was provided to the Iowa Department of Transportation (DOT) under a software-as-a-service agreement, allowing the Iowa DOT to implement the solution rapidly and with modest effort. During the 2010 fiscal year, the research team helped with the implementation process for the solution and collected feedback from the Broadway Viaduct project team members before the start of the project and implementation of the solution. For the 2011 fiscal year, the research team collected the post-project surveys from the Broadway Viaduct project members and compared them to the pre-project survey results. The outcome of the AEC-Sync implementation on the Broadway Viaduct project was positive: project members were satisfied with the performance of AEC-Sync and with how it facilitated document management and transparency. In addition, the research team distributed, collected, and analyzed the pre-project surveys for the Iowa Falls Arch Bridge project. During the 2012 fiscal year, the research team analyzed the post-project surveys for the Iowa Falls Arch Bridge AEC-Sync implementation and again found a positive outcome relative to the pre-project surveys. The second major effort involved the identification and implementation of a WPMS solution for smaller bridge and highway projects. During the 2011 fiscal year, Microsoft SharePoint was selected for implementation on these smaller highway projects, and workflows were developed for the shop/working drawings for the smaller highway projects specified in Section 1105 of the Iowa DOT Specifications.
These workflows will serve as the guide for developing the SharePoint pages. In implementing the Microsoft SharePoint pages, an integrated team proved vital because it brought together the required expertise of researchers, programmers, and webpage developers.
Abstract:
The motivation for this research arose from the abrupt rise and fall of minicomputers, which were initially used both for industrial automation and for business applications owing to their significantly lower cost than their predecessors, the mainframes. Industrial automation later developed its own vertically integrated hardware and software to address the application needs of uninterrupted operations, real-time control, and resilience to harsh environmental conditions. This led to the creation of an independent industry, namely industrial automation as used in PLC, DCS, SCADA, and robot control systems. This industry today employs over 200,000 people in a profitable, slow-clockspeed context, in contrast to the two mainstream computing industries: information technology (IT), focused on business applications, and telecommunications, focused on communications networks and hand-held devices. As early as the 1990s it was foreseen that IT and communications would merge into a single information and communication technology (ICT) industry. The fundamental question of the thesis is: could industrial automation leverage a common technology platform with the newly formed ICT industry? Computer systems dominated by complex instruction set computers (CISC) were challenged during the 1990s by higher-performance reduced instruction set computers (RISC). RISC evolved in parallel with the constant advancement of Moore's law, and these developments created the high-performance, low-energy-consumption system-on-chip (SoC) architecture. Unlike the CISC world, the RISC processor architecture business is an industry separate from RISC chip manufacturing. It also has several hardware-independent software platforms, each consisting of an integrated operating system, development environment, user interface, and application market, giving customers more choice through hardware-independent, real-time-capable software applications.
An architecture disruption emerged, and the smartphone and tablet markets were formed with new rules and new key players in the ICT industry. Today there are more RISC computer systems running Linux (or other Unix variants) than any other kind of computer system. The astonishing rise of SoC-based technologies and related software platforms in smartphones created, in unit terms, the largest installed base ever seen in the history of computers, and it is now being further extended by tablets. An additional underlying element of this transition is the increasing role of open-source technologies in both software and hardware. This has driven the microprocessor-based personal computer industry, with its few dominant closed operating system platforms, into a steep decline. A significant factor in this process has been the separation of processor architecture from processor chip production, along with the merger of operating systems and application development platforms into integrated software platforms with proprietary application markets. Furthermore, pay-by-click marketing has changed the way application development is compensated: freeware, ad-based, or licensed, all at a lower price and used by a wider customer base than ever before. Moreover, the concept of a software maintenance contract is very remote in the app world. However, as a slow-clockspeed industry, industrial automation has remained intact through the disruptions based on SoC and related software platforms in the ICT industries. Industrial automation incumbents continue to supply vertically integrated systems consisting of proprietary software and proprietary, mainly microprocessor-based, hardware.
They enjoy admirable profitability on a very narrow customer base, thanks to strong technology-enabled customer lock-in and customers' high risk exposure, as their production depends on fault-free operation of the industrial automation systems. When will this balance of power be disrupted? The thesis suggests how industrial automation could join the mainstream ICT industry to create an information, communication and automation technology (ICAT) industry. Lately, the Internet of Things (IoT) and weightless networks, a new standard leveraging frequency channels formerly occupied by TV broadcasting, have gradually started to change the rigid world of machine-to-machine (M2M) interaction. It is foreseeable that enough momentum will build that the industrial automation market will in due course face an architecture disruption empowered by these new trends. This thesis examines the current state of industrial automation and the competition among its incumbents, first through research on cost-competitiveness efforts in captive outsourcing of engineering, research, and development, and second through research on process re-engineering in the case of complex-system global software support. Third, we investigate the views of the industry's actors, namely customers, incumbents, and newcomers, on the future direction of industrial automation, and conclude with our assessment of the possible routes by which industrial automation could advance, taking into account the looming rise of the Internet of Things (IoT) and weightless networks. Industrial automation is an industry dominated by a handful of global players, each focusing on maintaining its own proprietary solutions.
The rise of de facto standards such as the IBM PC, Unix, Linux, and SoC, leveraged by IBM, Compaq, Dell, HP, ARM, Apple, Google, Samsung, and others, has created the new markets of personal computers, smartphones, and tablets, and will eventually also impact industrial automation through game-changing commoditization and the related control-point and business-model changes. This trend will inevitably continue, but the transition to a commoditized industrial automation will not happen in the near future.
Abstract:
To conserve natural resources and energy, the amount of recycled asphalt pavement (RAP) used in the construction of asphalt pavements has been steadily increasing. The objective of this study is to develop quality standards for the inclusion of high RAP contents. To determine whether higher percentages of RAP materials can be used on Iowa's state highways, three test sections with target RAP contents of 30%, 35%, and 40% by weight were constructed on Highway 6 in Iowa City. To meet Superpave mix design requirements for mixtures with high RAP contents, it was necessary to fractionate the RAP materials. Three test sections with actual RAP contents of 30.0%, 35.5%, and 39.2% by weight were constructed, and the average field densities from the cores were measured as 95.3%, 94.0%, and 94.3%, respectively. Field mixtures were compacted in the laboratory to evaluate moisture sensitivity using a Hamburg Wheel Tracking Device; after 20,000 passes, rut depths were less than 3 mm for mixtures obtained from all three test sections. The binder was extracted from the field mixtures from each test section and tested to identify the effects of RAP materials on the performance grade of the virgin binder. Based on Dynamic Shear Rheometer and Bending Beam Rheometer tests, the virgin binder (PG 64-28) from the test sections with 30.0%, 35.5%, and 39.2% RAP was stiffened to PG 76-22, PG 76-16, and PG 82-16, respectively. The Semi-Circular Bending (SCB) test was performed on laboratory-compacted field mixtures with RAP amounts of 30.0%, 35.5%, and 39.2% at two temperatures, -18 °C and -30 °C. As the test temperature decreased, the fracture energy decreased and the stiffness increased; likewise, as the RAP amount increased, the stiffness increased and the fracture energy decreased. Finally, a condition survey of the test sections was conducted to evaluate their short-term pavement performance; reflective transverse cracking did not increase as the RAP amount increased from 30.0% to 39.2%.
Abstract:
OBJECTIVE: To assess the survival benefit and safety profile of low-dose (850 mg/kg) and high-dose (1350 mg/kg) phospholipid emulsion vs. placebo administered as a continuous 3-day infusion in patients with confirmed or suspected Gram-negative severe sepsis. Preclinical and ex vivo studies show that lipoproteins bind and neutralize endotoxin, and experimental animal studies demonstrate protection from septic death when lipoproteins are administered; endotoxin neutralization correlates with the amount of phospholipid in the lipoprotein particles. DESIGN: A three-arm, randomized, blinded, placebo-controlled trial. SETTING: Conducted at 235 centers worldwide between September 2004 and April 2006. PATIENTS: A total of 1379 patients participated in the study: 598 patients received low-dose phospholipid emulsion and 599 patients received placebo. The high-dose phospholipid emulsion arm, which included 182 patients, was stopped on the recommendation of the Independent Data Monitoring Committee due to an increase in life-threatening serious adverse events at the fourth interim analysis. MEASUREMENTS AND MAIN RESULTS: The primary endpoints were 28-day all-cause mortality and new-onset organ failure. There was no significant treatment benefit for low- or high-dose phospholipid emulsion vs. placebo for 28-day all-cause mortality, with rates of 25.8% (p = .329), 31.3% (p = .879), and 26.9%, respectively. The rate of new-onset organ failure was not statistically different among groups, at 26.3%, 31.3%, and 20.4% with low-dose, high-dose, and placebo, respectively (one-sided p = .992, low vs. placebo; p = .999, high vs. placebo). Of the subjects treated, 45% had microbiologically confirmed Gram-negative infections. Maximal changes in mean hemoglobin levels were reached on day 10 (-1.04 g/dL) and day 5 (-1.36 g/dL) with low- and high-dose phospholipid emulsion, respectively, and on day 14 (-0.82 g/dL) with placebo.
CONCLUSIONS: Treatment with phospholipid emulsion did not reduce 28-day all-cause mortality or the onset of new organ failure in patients with suspected or confirmed Gram-negative severe sepsis.
Abstract:
Questionnaires were sent to transportation agencies in all 50 U.S. states, Puerto Rico, and all Canadian provinces asking about their experiences with uplift problems of corrugated metal pipe (CMP). Responses were received from 52 agencies, which reported 9 failures within the last 5 years. Some agencies also provided design standards for tiedowns to resist uplift. There was wide variation in the restraining forces used; for example, for a pipe 6 feet in diameter, the resisting force ranged from 10 kips to 66 kips. These responses verified the earlier conclusion, based on responses from Iowa county engineers, that a potential uplift danger exists when end restraint is not provided for CMP and that existing designs have an unclear theoretical or experimental basis. In an effort to develop more rational design standards, the longitudinal stiffnesses of three CMPs ranging from 4 to 8 feet in diameter were measured in the laboratory. Because only three tests were conducted, a theoretical model was also developed to evaluate the stiffness of pipes of a variety of gages and corrugation geometries. The experimental results indicated a "stiffness" EI in the range of 9.11 x 10^5 kip-in^2 to 34.43 x 10^5 kip-in^2 for the three pipes, with the larger-diameter pipes having greater stiffness. The theoretical model conservatively estimates these stiffnesses.
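For context on how a longitudinal bending stiffness EI might be back-calculated from a laboratory test, the sketch below applies the standard simply supported, center-point-load deflection formula δ = PL³/(48·EI). The load, span, and deflection values are purely hypothetical and are not the report's test data.

```python
# Back-calculating flexural stiffness EI from a three-point bending test.
# Standard beam theory gives delta = P * L^3 / (48 * EI) for a simply
# supported beam loaded at midspan, so EI = P * L^3 / (48 * delta).
# All numeric values below are illustrative assumptions, not test data.

def stiffness_from_deflection(load_kips, span_in, deflection_in):
    """Return EI in kip-in^2 for a simply supported, center-loaded beam."""
    return load_kips * span_in ** 3 / (48.0 * deflection_in)

# Hypothetical test: 2 kip load on a 240 in span deflecting 0.25 in.
ei = stiffness_from_deflection(2.0, 240.0, 0.25)
print(f"EI = {ei:.3e} kip-in^2")
```

A value computed this way can be compared directly against the 9.11 x 10^5 to 34.43 x 10^5 kip-in^2 range reported above.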
Abstract:
This investigation is the final phase of a three-part study whose overall objectives were to determine whether a restraining force is required to prevent inlet uplift failures in corrugated metal pipe (CMP) installations and to develop a procedure for calculating the required force when restraint is needed. In the initial phase of the study (HR-306), the extent of the uplift problem in Iowa was determined and the forces acting on a CMP were quantified. In the second phase (HR-332), laboratory and field tests were conducted: laboratory tests measured the longitudinal stiffness of CMP, and a full-scale field test on a 3.05 m (10 ft) diameter CMP with 0.61 m (2 ft) of cover determined the soil-structure interaction in response to uplift forces. Reported herein are the tasks completed in the final phase of the study. In this phase, a buried 2.44 m (8 ft) CMP was tested with and without end restraint and with various configurations of soil at the inlet end of the pipe. Four different soil configurations were tested; in all tests the soil cover was held constant at 0.61 m (2 ft). Data from these tests were used to verify the finite element analysis (FEA) model developed in this phase of the research. Both the experiments and the analyses indicate that the primary soil contribution to uplift resistance occurs in the foreslope and that the depth of soil cover does not affect the required tiedown force. Using the FEA model, design charts were developed with which engineers can determine, for a given situation, whether a restraint force is required to prevent an uplift failure; if restraint is needed, the charts provide the magnitude of the required force. The design charts are applicable to six gages of CMP, four flow conditions, and two types of soil.
Abstract:
This project utilized information from ground penetrating radar (GPR) and visual inspection via the pavement profile scanner (PPS) in proof-of-concept trials. GPR tests were carried out on a variety of portland cement concrete pavements and laboratory concrete specimens. Results indicated that the higher frequency GPR antennas were capable of detecting subsurface distress in two of the three pavement sites investigated. However, the GPR systems failed to detect distress in one pavement site that exhibited extensive cracking. Laboratory experiments indicated that moisture conditions in the cracked pavement probably explain the failure. Accurate surveys need to account for moisture in the pavement slab. Importantly, however, once the pavement site exhibits severe surface cracking, there is little need for GPR, which is primarily used to detect distress that is not observed visually. Two visual inspections were also conducted for this study by personnel from Mandli Communications, Inc., and the Iowa Department of Transportation (DOT). The surveys were conducted using an Iowa DOT video log van that Mandli had fitted with additional equipment. The first survey was an extended demonstration of the PPS system. The second survey utilized the PPS with a downward imaging system that provided high-resolution pavement images. Experimental difficulties occurred during both studies; however, enough information was extracted to consider both surveys successful in identifying pavement surface distress. The results obtained from both GPR testing and visual inspections were helpful in identifying sites that exhibited materials-related distress, and both were considered to have passed the proof-of-concept trials. However, neither method can currently diagnose materials-related distress. Both techniques only detected the symptoms of materials-related distress; the actual diagnosis still relied on coring and subsequent petrographic examination. 
Both technologies are currently in rapid development, and the limitations may be overcome as the technologies advance and mature.
Abstract:
The Quality Management Earthwork (QM-E) special provision was implemented on a pilot project to evaluate quality control (QC) and quality assurance (QA) testing in predominantly unsuitable soils. The control limits implemented on this pilot project were: 95% relative compaction; moisture content within ±2% of optimum moisture content; soil strength with a dynamic cone penetrometer (DCP) index not exceeding 70 mm/blow; vertical uniformity with a variation in DCP index not exceeding 40 mm/blow; and lift thickness not exceeding the depth determined through construction of control strips. Four-point moving averages were used to allow for some variability in the measured parameter values. Managing the QC/QA data proved to be one of the most challenging aspects of the pilot project; implementing the G-RAD data collection system has considerable potential to reduce the time required to develop and maintain QC/QA records for projects using the QM-E special provision. In many cases, the results of a single Proctor test were used to establish control limits that were then used for several months without retesting. While the data collected for the pilot project indicated that the DCP index control limits could be set more tightly, there is not enough evidence to support making a change. In situ borings, sampling, and testing in natural unsuitable cut material and compacted fill material revealed that, less than three months after the start of construction, the compacted fill had strength characteristics similar to those of the natural cut material.
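The four-point moving-average acceptance logic described above can be sketched as follows. The 95% relative-compaction lower limit follows the control limits named in the text, but the sample readings themselves are hypothetical.

```python
# Sketch of a four-point moving-average QC check, as used in the QM-E
# special provision. The 95% relative-compaction limit comes from the
# control limits described above; the readings are hypothetical.

def moving_averages(readings, window=4):
    """Return the moving averages of `readings` over `window` points."""
    return [sum(readings[i - window + 1:i + 1]) / window
            for i in range(window - 1, len(readings))]

def passes_limit(readings, limit=95.0, window=4):
    """True if every moving average meets the lower control limit."""
    return all(avg >= limit for avg in moving_averages(readings, window))

relative_compaction = [96.1, 94.8, 95.5, 95.2, 94.9, 96.0]  # percent
print(moving_averages(relative_compaction))
print(passes_limit(relative_compaction))
```

Note that individual readings below 95% (such as 94.8 above) can still pass, which is exactly the variability the moving average is meant to tolerate.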
Abstract:
Over three years, the postharvest quality of 'Marli' peaches harvested under integrated production (IFP) and conventional production (CFP) systems was evaluated. The peaches were harvested from commercial orchards of Prunus persica at two locations close to the city of São Jerônimo, RS, Brazil, and stored at 0.5 °C for 10, 20, or 30 days. The peaches were evaluated at harvest, at retrieval from storage, and after ripening periods at 20 °C. No differences in fruit weight loss were detected. Decay incidence was low, and no differences were detected between systems in the 2001 and 2002 seasons, but in the 2000 season CFP peaches showed more decay. Flesh firmness of peaches from the IFP system was greater than that of CFP fruit in 2000 and 2001; in 2002, firmness changed little during storage and ripening. Peaches from the IFP system in 2000 had higher titratable acidity and lower soluble solids. In the 2000 season, flesh browning was observed in decayed fruit, always at ripening after 20 or 30 days of cold storage. Chilling injuries such as browning, woolliness, and leatheriness occurred in 2002. Overall, there were no differences between the systems in peach quality.
Abstract:
A multifaceted investigation was undertaken to develop recommendations for methods to stabilize granular road shoulders with the goal of mitigating edge ruts. Included was reconnaissance of problematic shoulder locations, a laboratory study to develop a method to test for changes in granular material stability when stabilizing agents are used, and the construction of three sets of test sections under traffic at locations with problematic granular shoulders. Full results of this investigation are included in this report and its appendices. This report also presents conclusions and recommendations based on the study results.
Abstract:
Correlates of immune-mediated protection to most viral and cancer vaccines are still unknown. This impedes the development of novel vaccines to incurable diseases such as HIV and cancer. In this study, we have used functional genomics and polychromatic flow cytometry to define the signature of the immune response to the yellow fever (YF) vaccine 17D (YF17D) in a cohort of 40 volunteers followed for up to 1 yr after vaccination. We show that immunization with YF17D leads to an integrated immune response that includes several effector arms of innate immunity, including complement, the inflammasome, and interferons, as well as adaptive immunity as shown by an early T cell response followed by a brisk and variable B cell response. Development of these responses is preceded, as demonstrated in three independent vaccination trials and in a novel in vitro system of primary immune responses (modular immune in vitro construct [MIMIC] system), by the coordinated up-regulation of transcripts for specific transcription factors, including STAT1, IRF7, and ETS2, which are upstream of the different effector arms of the immune response. These results clearly show that the immune response to a strong vaccine is preceded by coordinated induction of master transcription factors that lead to the development of a broad, polyfunctional, and persistent immune response that integrates all effector cells of the immune system.
Abstract:
BACKGROUND: Most patients with glioblastoma are older than 60 years, but treatment guidelines are based on trials in patients aged only up to 70 years. We did a randomised trial to assess the optimum palliative treatment in patients aged 60 years and older with glioblastoma. METHODS: Patients with newly diagnosed glioblastoma were recruited from Austria, Denmark, France, Norway, Sweden, Switzerland, and Turkey. They were assigned by a computer-generated randomisation schedule, stratified by centre, to receive temozolomide (200 mg/m(2) on days 1-5 of every 28 days for up to six cycles), hypofractionated radiotherapy (34·0 Gy administered in 3·4 Gy fractions over 2 weeks), or standard radiotherapy (60·0 Gy administered in 2·0 Gy fractions over 6 weeks). Patients and study staff were aware of treatment assignment. The primary endpoint was overall survival. Analyses were done by intention to treat. This trial is registered, number ISRCTN81470623. FINDINGS: 342 patients were enrolled, of whom 291 were randomised across three treatment groups (temozolomide n=93, hypofractionated radiotherapy n=98, standard radiotherapy n=100) and 51 of whom were randomised across only two groups (temozolomide n=26, hypofractionated radiotherapy n=25). In the three-group randomisation, in comparison with standard radiotherapy, median overall survival was significantly longer with temozolomide (8·3 months [95% CI 7·1-9·5; n=93] vs 6·0 months [95% CI 5·1-6·8; n=100], hazard ratio [HR] 0·70; 95% CI 0·52-0·93, p=0·01), but not with hypofractionated radiotherapy (7·5 months [6·5-8·6; n=98], HR 0·85 [0·64-1·12], p=0·24). For all patients who received temozolomide or hypofractionated radiotherapy (n=242) overall survival was similar (8·4 months [7·3-9·4; n=119] vs 7·4 months [6·4-8·4; n=123]; HR 0·82, 95% CI 0·63-1·06; p=0·12). 
Among patients older than 70 years, survival was better with temozolomide and with hypofractionated radiotherapy than with standard radiotherapy (HR for temozolomide vs standard radiotherapy 0·35 [0·21-0·56], p<0·0001; HR for hypofractionated vs standard radiotherapy 0·59 [95% CI 0·37-0·93], p=0·02). Patients treated with temozolomide who had tumour MGMT promoter methylation had significantly longer survival than those without MGMT promoter methylation (9·7 months [95% CI 8·0-11·4] vs 6·8 months [5·9-7·7]; HR 0·56 [95% CI 0·34-0·93], p=0·02), but no difference was noted between those with methylated and unmethylated MGMT promoter treated with radiotherapy (HR 0·97 [95% CI 0·69-1·38]; p=0·81). As expected, the most common grade 3-4 adverse events in the temozolomide group were neutropenia (n=12) and thrombocytopenia (n=18). Grade 3-5 infections in all randomisation groups were reported in 18 patients. Two patients had fatal infections (one in the temozolomide group and one in the standard radiotherapy group), and one patient in the temozolomide group with grade 2 thrombocytopenia died from complications after surgery for a gastrointestinal bleed. INTERPRETATION: Standard radiotherapy was associated with poor outcomes, especially in patients older than 70 years. Both temozolomide and hypofractionated radiotherapy should be considered as standard treatment options in elderly patients with glioblastoma. MGMT promoter methylation status might be a useful predictive marker for benefit from temozolomide. FUNDING: Merck, Lion's Cancer Research Foundation, University of Umeå, and the Swedish Cancer Society.
Abstract:
OBJECTIVE: The objective of this study was to analyse the long-term mortality and morbidity of a group of patients undergoing thrombolysis during the acute phase of myocardial infarction and to determine the factors influencing prognosis. One hundred and seventy-five patients (149 men and 26 women; mean age 54 years) were included in a randomized study comparing the efficacy of two thrombolytic agents administered during the acute phase of myocardial infarction. A standard questionnaire was sent to the various attending physicians to follow up these 175 patients. RESULTS: Hospital mortality was 5% (9 patients), and 14 patients (9%) died after a mean follow-up of 4.3 +/- 2.1 years. The 5-year actuarial survival was 81%. Fourteen patients (8%) were lost to follow-up, and 49 patients (32%) underwent surgical or percutaneous revascularization during follow-up. Revascularized patients had significantly better survival than non-revascularized patients. The mean left ventricular ejection fraction of patients who died was lower than that of survivors (48% versus 71%). Patients with an ejection fraction < 40% also had significantly lower survival (p = 0.01). Patency of the vessel after thrombolysis was associated with slightly better survival, but this difference was not significant. The ejection fraction at 6 months was also significantly higher for patients with a patent artery (60 +/- 10% versus 49 +/- 11%). Three risk factors for death or reinfarction were identified: age > 65 years at the time of infarction, disease in more than one coronary vessel, and absence of angina pectoris before infarction. The probability of a coronary event varied from 2% to 88% according to the number of risk factors present. At follow-up, 60% of patients presented with hypercholesterolaemia versus only 7% before infarction; 73% of patients received anticoagulant or antiaggregant treatment, and 81% of patients were asymptomatic.
CONCLUSION: The mortality and the acute and long-term morbidity of myocardial infarction remain high, as only 34% of our patients remained event-free during follow-up despite rigorous medical management and follow-up. The ejection fraction has important prognostic value. Patient management should take the above-mentioned risk factors into account.
Abstract:
Problems involving multiphase flow in porous media are of great interest in many scientific and engineering applications, including Carbon Capture and Storage, oil recovery, and groundwater remediation. The intrinsic complexity of multiphase systems and the multiscale heterogeneity of geological formations are the major challenges in understanding and modeling immiscible displacement in porous media. Upscaled descriptions based on generalizations of Darcy's law are widely used, but they are subject to several limitations for flows that exhibit hysteretic, history-dependent behavior. Recent advances in high-performance computing and the development of accurate methods for characterizing pore space and phase distribution have fostered the use of models that allow sub-pore resolution. These models provide insight into flow characteristics that cannot easily be observed in laboratory experiments and can be used to explain the gap between the physical processes and existing macro-scale models. We focus on direct numerical simulation: we solve the Navier-Stokes equations for mass and momentum conservation in the pore space and employ the Volume of Fluid (VOF) method to track the evolution of the interface. In VOF, the distribution of the phases is described by a single fluid function (whole-domain formulation), and special boundary conditions account for the wetting properties of the porous medium. In the first part of this thesis we simulate drainage in a 2-D Hele-Shaw cell filled with cylindrical obstacles. We show that the proposed approach can handle very large density and viscosity ratios and is able to model the transition from stable displacement to viscous fingering.
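The interface-capturing idea behind the VOF method mentioned above can be illustrated with a minimal 1-D sketch. This is not the thesis's actual solver (which solves the full Navier-Stokes equations in 3-D pore geometries): it is a hypothetical, deliberately simplified stand-in that advects a volume-fraction field with a first-order upwind scheme, just to show how a single fluid function represents the two phases and is transported to track the interface.

```python
import numpy as np

def advect_vof_1d(alpha, u, dx, dt, steps):
    """Advect a volume-fraction field alpha (1 = invading fluid, 0 = defending
    fluid) at constant velocity u > 0 with first-order upwind fluxes.
    A toy analogue of the interface-transport step of a VOF method."""
    c = u * dt / dx  # Courant number; the scheme is stable for 0 <= c <= 1
    assert 0.0 <= c <= 1.0, "CFL condition violated"
    a = alpha.copy()
    for _ in range(steps):
        # each interior cell is a convex combination of itself and its
        # upwind neighbour, so alpha stays bounded in [0, 1]
        a[1:] = a[1:] - c * (a[1:] - a[:-1])
    return a

# Sharp interface initially at a quarter of a unit domain; the left
# boundary cell stays at alpha = 1 (inflow of the invading fluid).
n = 100
alpha0 = np.where(np.arange(n) < n // 4, 1.0, 0.0)
alpha = advect_vof_1d(alpha0, u=1.0, dx=1.0 / n, dt=0.005, steps=50)
```

The upwind flux keeps the field bounded and conservative but smears the interface numerically, which is one reason practical VOF implementations use geometric interface reconstruction instead.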
We then focus on the interpretation of the macroscopic capillary pressure, showing that pressure-averaging techniques are subject to several limitations and are inaccurate in the presence of viscous effects and trapping. By contrast, an energy-based definition allows the viscous and capillary contributions to be separated. In the second part of the thesis we investigate inertial effects associated with abrupt and irreversible reconfigurations of the menisci caused by interface instabilities. As a prototype of these phenomena, we first consider the dynamics of a meniscus in an angular pore. We show that, in a network of cubic pores, jumps and reconfigurations are so frequent that inertial effects lead to different fluid configurations. Owing to the non-linearity of the problem, the distribution of the fluids influences the work done by pressure forces, which is in turn related to the pressure drop in Darcy's law. This suggests that these phenomena should be taken into account when upscaling multiphase flow in porous media. The last part of the thesis is devoted to validating the numerical approach against experiments on unstable primary drainage in a quasi-2-D porous medium (i.e., a Hele-Shaw cell filled with cylindrical obstacles). We perform simulations under different boundary conditions and with different models (2-D integrated and full 3-D), and we compare several macroscopic quantities with the corresponding experimental observations. Despite the intrinsic challenges of modeling unstable displacement, where by definition small perturbations can grow without bound, the numerical method gives satisfactory results for all the cases studied.
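The pressure-averaging technique whose limitations the abstract discusses can be sketched in a few lines. The example below is an illustrative assumption, not the thesis's code: it estimates a macroscopic capillary pressure as the difference between the volume-averaged pressures of the two phases on a synthetic, static pore-scale field, where (absent viscous gradients and trapped ganglia) the average recovers the imposed pressure jump exactly.

```python
import numpy as np

def phase_averaged_capillary_pressure(p, alpha):
    """Macroscopic capillary pressure by phase averaging: the difference
    between the volume-averaged pressures of the non-wetting (alpha = 1)
    and wetting (alpha = 0) phases.  p and alpha are pore-scale fields
    defined on the same grid."""
    w_nw = alpha          # non-wetting phase volume fraction per cell
    w_w = 1.0 - alpha     # wetting phase volume fraction per cell
    p_nw = (p * w_nw).sum() / w_nw.sum()
    p_w = (p * w_w).sum() / w_w.sum()
    return p_nw - p_w

# Synthetic static configuration: uniform pressure in each phase and a
# 200 Pa jump across the interface (hypothetical numbers for illustration).
alpha = np.zeros((20, 20))
alpha[:, :8] = 1.0                          # non-wetting phase on the left
p = np.where(alpha > 0.5, 1200.0, 1000.0)   # Pa
pc = phase_averaged_capillary_pressure(p, alpha)
```

In a dynamic displacement the averages also pick up viscous pressure gradients and the pressures of disconnected trapped blobs, which is precisely why the thesis argues for an energy-based definition that isolates the capillary contribution.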