958 results for Probabilistic generalization
Abstract:
Introduction: Medication errors are defined as any preventable incident that may cause harm to the patient or lead to inappropriate use of medicines while these are under the control of healthcare professionals or of the patient. Errors in the preparation and administration of medication are the most common in the hospital setting and, despite the long chain the drug passes through, the nurse is the last person responsible for the action and therefore plays a very important role in patient safety. Nurses devote 40% of their working day to medication-related tasks. Objective: To determine whether nurses make more errors when working with ward-stock medication distribution systems or with unit-dose distribution systems. Methodology: A quantitative, observational, descriptive study in which errors (or opportunities for error) made by the nurse during the preparation and administration of medication will be reported through a purpose-built questionnaire. The elements to be identified are: the type of error, the causes that may have produced it, its potential severity and who could have prevented it, as well as the type of professional who made it. Other relevant data are the drug involved, together with its dose and route of administration, and the distribution system used. Sampling and sample: Sampling will be non-probabilistic and by convenience. The researcher will select nurses considered to have the characteristics required to take part in the study, so the sample will consist of the nurses of Unit 40 of the Hospital del Mar, who use a unit-dose medication distribution system, and the nurses of the emergency department (specifically the level-two area) of the Hospital del Mar, who work with a ward-stock medication distribution system.
Abstract:
The study presented below examines physical conditioning as applied to football. In addition, we assess maximal oxygen uptake (VO2max) through an indirect measurement procedure (the Course Navette test), taking into account the number of weekly training sessions. We review how the parameter is defined by Zintl (1991), Diéguez (1997) and Gómez, Aranda and Ferrer (2010), among others, and the reference levels proposed by authors such as Zintl (1991), Rivera and Avella (1992) and Sánchez and Salas (2008). To this end, we conducted a hypothetico-deductive study based on non-probabilistic sampling. We compare the VO2max values reached during the preparatory period with their maintenance, improvement or loss after a training period of either one or three weekly sessions. The results show that the levels reached after the preparatory period (60.87 ± 8.81 ml/kg/min) are higher than at its start (44.38 ± 8.92 ml/kg/min). Comparing the two groups in the sample, we find that even with three weekly sessions the earlier levels are not maintained (57.80 ± 10.16 versus 56.73 ± 9.24 in the final test). Finally, in group 2, with a single weekly session, the values drop further, from 63.93 ± 6.24 ml/kg/min to 56.50 ± 7.52 ml/kg/min.
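As a hedged illustration of the indirect estimation step mentioned above: the abstract does not state which regression was applied to the Course Navette result, but the widely cited Léger et al. (1988) equation for ages 8-19 converts the final stage speed of the 20 m shuttle run into an estimate of VO2max. A minimal Python sketch, assuming that equation (function name and example values are ours):

```python
def vo2max_leger(final_speed_kmh: float, age_years: float) -> float:
    """Estimate VO2max (ml/kg/min) from the final stage speed of the 20 m
    shuttle-run (Course Navette), using the Leger et al. (1988) regression
    for subjects aged 8-19."""
    v, a = final_speed_kmh, age_years
    return 31.025 + 3.238 * v - 3.248 * a + 0.1536 * v * a

# Example: a 17-year-old reaching a final stage speed of 13.5 km/h
print(round(vo2max_leger(13.5, 17), 2))  # about 54.77 ml/kg/min
```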
Abstract:
Normal and abnormal brains can be segmented by registering the target image with an atlas. Here, an atlas is defined as the combination of an intensity image (template) and its segmented image (the atlas labels). After registering the atlas template and the target image, the atlas labels are propagated to the target image. We define this process as atlas-based segmentation. In recent years, researchers have investigated registration algorithms to match atlases to query subjects and also strategies for atlas construction. In this paper we present a review of the automated approaches for atlas-based segmentation of magnetic resonance brain images. We aim to point out the strengths and weaknesses of atlas-based methods and suggest new research directions. We use two different criteria to present the methods. First, we refer to the algorithms according to their atlas-based strategy: label propagation, multi-atlas methods, and probabilistic techniques. Subsequently, we classify the methods according to their medical target: the brain and its internal structures, tissue segmentation in healthy subjects, tissue segmentation in fetuses, neonates and elderly subjects, and segmentation of damaged brains. A quantitative comparison of the results reported in the literature is also presented.
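As an illustrative sketch of the multi-atlas idea reviewed above: once several atlas label maps have been warped into the target space by registration, the simplest fusion rule is per-voxel majority voting. A minimal numpy version (the review covers far more elaborate fusion and registration strategies; the function name and toy data are ours):

```python
import numpy as np

def majority_vote_fusion(propagated_labels: list) -> np.ndarray:
    """Fuse label volumes propagated from several registered atlases by
    majority voting: each voxel receives the label most atlases agree on.

    propagated_labels: list of integer label arrays of identical shape,
    already warped into the target image space.
    """
    stack = np.stack(propagated_labels, axis=0)          # (n_atlases, *volume)
    n_labels = int(stack.max()) + 1
    # Count votes per label at every voxel, then pick the winner.
    votes = np.stack([(stack == k).sum(axis=0) for k in range(n_labels)])
    return votes.argmax(axis=0)

# Toy example: three 2x2 "volumes" from three atlases
atlases = [np.array([[0, 1], [1, 1]]),
           np.array([[0, 1], [0, 1]]),
           np.array([[1, 1], [1, 0]])]
print(majority_vote_fusion(atlases))  # [[0 1] [1 1]]
```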
Abstract:
We propose a novel multifactor dimensionality reduction method for epistasis detection in small or extended pedigrees, FAM-MDR. It combines features of the Genome-wide Rapid Association using Mixed Model And Regression approach (GRAMMAR) with Model-Based MDR (MB-MDR). We focus on continuous traits, although the method is general and can be used for outcomes of any type, including binary and censored traits. When comparing FAM-MDR with Pedigree-based Generalized MDR (PGMDR), which is a generalization of Multifactor Dimensionality Reduction (MDR) to continuous traits and related individuals, FAM-MDR was found to outperform PGMDR in terms of power, in most of the considered simulated scenarios. Additional simulations revealed that PGMDR does not appropriately deal with multiple testing and consequently gives rise to overly optimistic results. FAM-MDR adequately deals with multiple testing in epistasis screens and is in contrast rather conservative, by construction. Furthermore, simulations show that correcting for lower order (main) effects is of utmost importance when claiming epistasis. As Type 2 Diabetes Mellitus (T2DM) is a complex phenotype likely influenced by gene-gene interactions, we applied FAM-MDR to examine data on glucose area-under-the-curve (GAUC), an endophenotype of T2DM for which multiple independent genetic associations have been observed, in the Amish Family Diabetes Study (AFDS). This application reveals that FAM-MDR makes more efficient use of the available data than PGMDR and can deal with multi-generational pedigrees more easily. In conclusion, we have validated FAM-MDR and compared it to PGMDR, the current state-of-the-art MDR method for family data, using both simulations and a practical dataset. FAM-MDR is found to outperform PGMDR in that it handles the multiple testing issue more correctly, has increased power, and efficiently uses all available information.
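For readers unfamiliar with the MDR family, the hedged sketch below shows only the generic dimensionality-reduction step for a continuous trait (labelling each two-locus genotype cell high or low risk against the overall trait mean), not the FAM-MDR pipeline itself, which additionally removes familial correlation via GRAMMAR-style residuals and corrects for multiple testing. Function name and synthetic data are ours:

```python
import numpy as np

def mdr_cells(snp_a: np.ndarray, snp_b: np.ndarray, trait: np.ndarray) -> np.ndarray:
    """One MDR reduction step for a continuous trait and a SNP pair
    (genotypes coded 0/1/2): label each of the 9 two-locus cells
    'high risk' (1) if its mean trait value exceeds the overall mean,
    'low risk' (0) otherwise, collapsing two dimensions into one."""
    overall = trait.mean()
    labels = np.zeros((3, 3), dtype=int)
    for ga in range(3):
        for gb in range(3):
            in_cell = (snp_a == ga) & (snp_b == gb)
            if in_cell.any() and trait[in_cell].mean() > overall:
                labels[ga, gb] = 1
    return labels

rng = np.random.default_rng(0)
a, b = rng.integers(0, 3, 500), rng.integers(0, 3, 500)
y = 0.8 * ((a == 2) & (b == 2)) + rng.normal(size=500)  # purely epistatic signal
print(mdr_cells(a, b, y))  # cell (2, 2) should tend to be labelled 1
```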
Abstract:
Methods for extracting features from physiological datasets are increasingly needed as clinical investigations of Alzheimer's disease (AD) in large, heterogeneous populations grow. General tools that allow diagnosis regardless of recording site, such as different hospitals, are essential, and if combined with inexpensive non-invasive methods they could critically improve mass screening of subjects with AD. In this study, we applied three state-of-the-art multiway array decomposition (MAD) methods to extract features from electroencephalograms (EEGs) of AD patients obtained from multiple sites. For comparison with MAD, spectral-spatial average filters (SSFs) of control and AD subjects were used, as well as a common blind source separation method, the algorithm for multiple unknown signal extraction (AMUSE). We trained a feed-forward multilayer perceptron (MLP) to validate and optimize AD classification from two independent databases. Using a third EEG dataset, we demonstrated that features extracted with MAD outperformed features obtained from SSFs and AMUSE in terms of root mean squared error (RMSE), reaching up to 100% accuracy in the test condition. We propose that MAD may be a useful tool for extracting features for AD diagnosis, offering good generalization across multi-site databases and opening the door to new characterizations of the disease.
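As a rough illustration of what a MAD method does: PARAFAC/CP decomposition, one member of this family, factorizes a subjects x channels x frequencies EEG tensor into rank-one components, and the subject-mode factors can then serve as classifier features. A self-contained numpy sketch of CP by alternating least squares, on synthetic data; the study's actual methods, preprocessing and databases are not reproduced here:

```python
import numpy as np

def khatri_rao(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Column-wise Kronecker product of two factor matrices."""
    return (a[:, None, :] * b[None, :, :]).reshape(-1, a.shape[1])

def cp_als(x: np.ndarray, rank: int, n_iter: int = 100, seed: int = 0):
    """Minimal CP/PARAFAC decomposition of a 3-way array by alternating
    least squares: returns factor matrices (A, B, C) such that
    x[i, j, k] ~ sum_r A[i, r] * B[j, r] * C[k, r]."""
    rng = np.random.default_rng(seed)
    dims = x.shape
    a, b, c = (rng.standard_normal((d, rank)) for d in dims)
    # Mode-n unfoldings of x (C-order indexing matches khatri_rao above)
    x0 = x.reshape(dims[0], -1)
    x1 = np.moveaxis(x, 1, 0).reshape(dims[1], -1)
    x2 = np.moveaxis(x, 2, 0).reshape(dims[2], -1)
    for _ in range(n_iter):
        a = x0 @ np.linalg.pinv(khatri_rao(b, c).T)
        b = x1 @ np.linalg.pinv(khatri_rao(a, c).T)
        c = x2 @ np.linalg.pinv(khatri_rao(a, b).T)
    return a, b, c

# Synthetic EEG feature tensor: subjects x channels x frequency bins
tensor = np.random.default_rng(1).standard_normal((20, 16, 32))
A, B, C = cp_als(tensor, rank=4)
# Rows of A are per-subject features one could feed to an MLP classifier.
```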
Abstract:
A review of the national gerontological literature reveals some confusion in the terms used to refer to mechanical restraints, a lack of dialogue about the ethical conflicts their use raises, a marked generalization of the arguments against them, and an absence of positive references despite their high prevalence, as shown by several authors. This paper presents some technical proposals on the definition, terminology and use of mechanical restraints in the social care setting: putting ethical dialogue before arguments based on prevalence, defining restraints in terms of their intent, agreeing on a classification of the different restraint methods, identifying the types and levels of risk, and intervening specifically according to them. Finally, recommendations are added regarding risks, the decision-making process, prescription and the withdrawal process.
Abstract:
Both Bayesian networks and probabilistic evaluation are gaining ever more widespread use in many professional fields, including forensic science. They are nonetheless subtle topics, with definitional details that require careful study. While many sophisticated developments of probabilistic approaches to the evaluation of forensic findings can readily be found in the published literature, there remains a gap in writings that focus on foundational aspects and on how these may be acquired by interested scientists new to the topic. This paper takes that as a starting point to report on learning about Bayesian networks for likelihood ratio based probabilistic inference procedures in a class of master's students in forensic science. The presentation uses an example based on a casework scenario drawn from the published literature, involving a questioned signature. A complicating aspect of that case study, proposed to students in a teaching scenario, is the need to consider multiple competing propositions, a setting that cannot readily be approached within a likelihood ratio based framework without attending to some additional technical details. Using generic Bayesian network fragments from the existing literature on the topic, course participants were able to track the probabilistic underpinnings of the proposed scenario correctly, both in terms of likelihood ratios and of posterior probabilities. In addition, further study of the example allowed students to derive an alternative Bayesian network structure whose computational output is equivalent to existing probabilistic solutions. This practical experience underlines the potential of Bayesian networks to support and clarify the foundational principles of probabilistic procedures for forensic evaluation.
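The probabilistic machinery the students tracked can be summarized without any network software: with several mutually exclusive propositions, posterior probabilities follow from Bayes' theorem, and likelihood ratios are formed pairwise. A small sketch with purely hypothetical numbers, not those of the signature case:

```python
import numpy as np

def posteriors(priors, likelihoods) -> np.ndarray:
    """Posterior probabilities for mutually exclusive propositions H_i,
    via Bayes' theorem:
    P(H_i | E) = P(E | H_i) P(H_i) / sum_j P(E | H_j) P(H_j)."""
    joint = np.asarray(priors) * np.asarray(likelihoods)
    return joint / joint.sum()

# Hypothetical three-proposition signature case: H1 genuine signature,
# H2 simulated by another writer, H3 self-disguised.
priors = [1 / 3, 1 / 3, 1 / 3]      # equal priors, for illustration only
likelihoods = [0.8, 0.05, 0.15]     # assumed P(findings | H_i)
print(posteriors(priors, likelihoods))   # with flat priors: [0.8 0.05 0.15]
print(likelihoods[0] / likelihoods[1])   # pairwise LR of H1 vs H2: 16.0
```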
Abstract:
Helping behavior is any intentional behavior that benefits another living being or group (Hogg & Vaughan, 2010). People tend to underestimate the probability that others will comply with their direct requests for help (Flynn & Lake, 2008). This implies that when people need help, they will assess the probability of getting it (De Paulo, 1982, cited in Flynn & Lake, 2008), tend to estimate one that is actually lower than the real chance, and so may not even consider it worth asking. Existing explanations attribute this phenomenon to a mistaken cost computation by the help seeker, who emphasizes the instrumental cost of saying "yes" while ignoring that the potential helper must also take into account the social cost of saying "no". And indeed, especially in face-to-face interactions, the discomfort caused by refusing to help can be very high. In short, help seekers tend to fail to realize that refusing a help request may be more costly than accepting it. A similar effect has been observed in estimates of people's trustworthiness: Fetchenhauer and Dunning (2010) showed that people tend to underestimate it too. This bias is reduced when, instead of asymmetric feedback (received only when one decides to trust the other person), symmetric feedback (always given) is provided. The same account could apply to help seeking, as people receive feedback only when they actually make their request, and not otherwise. Fazio, Shook, and Eiser (2004) studied something that could be reinforcing these outcomes: learning asymmetries. By means of a computer game called BeanFest, they showed that people learn better about negatively valenced objects (beans, in this case) than about positively valenced ones. This learning asymmetry stemmed from "information gain being contingent on approach behavior" (p. 293), which can be identified with what Fetchenhauer and Dunning call asymmetric feedback, and hence also with help requests. Fazio et al. also found a generalization asymmetry favoring negative attitudes over positive ones. They attributed it to a negativity bias that "weights resemblance to a known negative more heavily than resemblance to a positive" (p. 300). Applied to help-seeking scenarios, this means that when facing an unknown situation, people tend to generalize and infer that a negative outcome is more likely than a positive one, so, together with the points above, they will be more inclined to expect a "no" when requesting help. Denrell and Le Mens (2011) offer a different perspective on judgment biases in general. They depart from the classical account based on inappropriate information processing (described by, among others, Fiske & Taylor, 2007, and Tversky & Kahneman, 1974) and explain such biases in terms of adaptive sampling. Adaptive sampling is a sampling mechanism in which the selection of sample items is conditioned by the previously observed values of the variable of interest (Thompson, 2011). Sampling adaptively allows individuals to safeguard themselves against experiences that once produced negative outcomes. However, it also prevents them from giving those experiences a second chance to yield an updated outcome: one that might turn positive, more positive, or simply regress to the mean, in whatever direction that implies.
That, as Denrell and Le Mens (2011) explained, makes sense: if you go to a restaurant and do not like the food, you do not choose that restaurant again. This is what we think may be happening when asking for help: when we get a "no", we stop asking. Here we provide a complementary explanation, based on adaptive sampling, for the underestimation of the probability that others comply with our direct help requests. First, we develop and explain a model that represents the theory. Then we test it empirically by means of experiments and elaborate on the analysis of the results.
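To make the adaptive-sampling account concrete, the following toy simulation (our own illustrative sketch, not the authors' model) lets agents keep asking for help while they receive a "yes" and stop sampling after the first "no"; the population's mean estimate of the compliance probability then falls below its true value:

```python
import numpy as np

def simulate(p_yes: float, n_agents: int = 10_000, horizon: int = 50,
             seed: int = 0) -> float:
    """Toy adaptive-sampling model of help seeking: each agent keeps asking
    while outcomes are positive and stops after a 'no'. The agent's final
    estimate of P(yes) is its observed success rate, so the population's
    mean estimate ends up below the true p_yes."""
    rng = np.random.default_rng(seed)
    estimates = []
    for _ in range(n_agents):
        outcomes = []
        for _ in range(horizon):
            yes = rng.random() < p_yes
            outcomes.append(yes)
            if not yes:          # adaptive rule: a refusal stops sampling
                break
        estimates.append(np.mean(outcomes))
    return float(np.mean(estimates))

print(simulate(0.7))  # mean estimate around 0.48, well below the true 0.7
```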
Abstract:
From the point of view of uniform bounds for the birationality of pluricanonical maps, irregular varieties of general type and maximal Albanese dimension behave similarly to curves. In fact Chen-Hacon showed that, at least when their holomorphic Euler characteristic is positive, the tricanonical map of such varieties is always birational. In this paper we study the bicanonical map. We consider the natural subclass of varieties of maximal Albanese dimension formed by primitive varieties of Albanese general type. We prove that the only such varieties with non-birational bicanonical map are the natural higher-dimensional generalization to this context of curves of genus $2$: varieties birationally equivalent to the theta-divisor of an indecomposable principally polarized abelian variety. The proof is based on the (generalized) Fourier-Mukai transform.
Abstract:
The research reported in this series of articles aimed (1) to automate the search of questioned ink specimens in ink reference collections and (2) to evaluate the strength of ink evidence in a transparent and balanced manner. These aims require that ink samples be analysed in an accurate and reproducible way and compared in an objective and automated way, the latter because of the large number of comparisons needed in both scenarios. A research programme was designed to (a) develop a standard methodology for analysing ink samples in a reproducible way, (b) compare ink samples automatically and objectively, and (c) evaluate the proposed methodology in forensic contexts. This report focuses on the last of the three stages. The calibration and acquisition process and the mathematical comparison algorithms were described in previous papers [C. Neumann, P. Margot, New perspectives in the use of ink evidence in forensic science. Part I: Development of a quality assurance process for forensic ink analysis by HPTLC, Forensic Sci. Int. 185 (2009) 29-37; C. Neumann, P. Margot, New perspectives in the use of ink evidence in forensic science. Part II: Development and testing of mathematical algorithms for the automatic comparison of ink samples analysed by HPTLC, Forensic Sci. Int. 185 (2009) 38-50]. In this paper, the benefits and challenges of the proposed concepts are tested in two forensic contexts: (1) ink identification and (2) assessment of the evidential value of ink. The results show that different algorithms are better suited to different tasks. This research shows that it is possible to build digital ink libraries using the most common ink analytical technique, high-performance thin-layer chromatography (HPTLC), despite its reputation for poor reproducibility. More importantly, it is possible to assign evidential value to ink evidence in a transparent way using a probabilistic model, and thus to move away from the traditional subjective approach, which is based entirely on expert opinion and is usually not very informative. While there is room for improvement, this report demonstrates significant gains over the traditional subjective approach in both the search of ink specimens in ink databases and the interpretation of their evidential value.
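To give a flavor of the comparison-and-evaluation pipeline (the actual algorithms are those of Parts I and II), the sketch below scores two HPTLC intensity profiles with a Pearson correlation, one of several plausible metrics, and converts the score into a likelihood ratio using assumed Gaussian score distributions for same-source and different-source ink pairs. The function and all parameters are illustrative assumptions, not the papers' calibrated models:

```python
import numpy as np
from scipy.stats import norm, pearsonr

def score_lr(questioned, reference, same_params, diff_params):
    """Compare two HPTLC intensity profiles with a similarity score
    (Pearson correlation here) and convert it to a likelihood ratio using
    score distributions fitted beforehand on pairs of same-source and
    different-source inks (assumed Gaussian in this sketch)."""
    score, _ = pearsonr(questioned, reference)
    lr = norm.pdf(score, *same_params) / norm.pdf(score, *diff_params)
    return score, lr

# Assumed (mean, std) of scores for same- and different-source pairs
same_params, diff_params = (0.95, 0.03), (0.60, 0.20)
rng = np.random.default_rng(2)
profile = rng.random(200)
questioned = profile + rng.normal(scale=0.05, size=200)  # same ink, noisy rerun
print(score_lr(questioned, profile, same_params, diff_params))  # LR >> 1
```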
Abstract:
The statistical properties of binary complex networks are well understood, and recently many attempts have been made to extend this knowledge to weighted networks. There are, however, subtle yet important considerations regarding the nature of the weights used in this generalization. Weights can be either continuous or discrete magnitudes and, in the latter case, can additionally be indistinguishable or distinguishable in nature. This fact has not been addressed in the literature so far and has deep implications for network statistics. In this work we face the problem by introducing multiedge networks: graphs in which multiple (distinguishable) connections between nodes are considered. We develop a statistical-mechanics framework in which information can be obtained about the most relevant observables given a large spectrum of linear and nonlinear constraints, including those depending both on the number of multiedges per link and on their binary projection. The latter case is particularly interesting, as we show that binary projections can be understood as arising from multiedge processes. These results matter because many real agent-based problems mapped onto graphs require this treatment for a proper characterization of their collective behavior.
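One of the simplest ensembles that such a framework covers can be sampled directly: take the occupation number of each node pair to be an independent Poisson variable with a strength-driven mean, so that expected strengths are (approximately) preserved, and obtain the binary projection by thresholding. A hedged numpy sketch under those assumptions; the paper's framework is far more general:

```python
import numpy as np

def sample_multiedge(strengths, seed: int = 0) -> np.ndarray:
    """Sample a multiedge network in which the number of distinguishable
    edges between i and j is Poisson with mean s_i * s_j / S, where S is
    the total strength. Then E[sum_j w_ij] is approximately s_i (up to
    the diagonal convention)."""
    s = np.asarray(strengths, dtype=float)
    mean = np.outer(s, s) / s.sum()
    rng = np.random.default_rng(seed)
    w = np.triu(rng.poisson(mean), 1)   # upper triangle: undirected, no loops
    return w + w.T

w = sample_multiedge([10, 20, 30, 40])
print(w)                    # multiedge occupation numbers per node pair
print((w > 0).astype(int))  # binary projection of the multiedge network
```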
Abstract:
The purpose of this article is to address a currently much debated issue: the effects of age on second language learning. To do so, we contrast data collected by our research team from over one thousand seven hundred young and adult learners with four popular beliefs or generalizations which, while deeply rooted in this society, are not always corroborated by our data. Two of these generalizations about Second Language Acquisition (languages spoken in the social context) seem to be widely accepted: a) older children, adolescents and adults are quicker and more efficient in the first stages of learning than younger learners; b) in a natural context, children with an early start are more likely to attain higher levels of proficiency. However, in the context of Foreign Language Acquisition, in which we collect our data, this second generalization is difficult to verify owing to the low number of instructional hours (a maximum of some 800 hours) and the lower levels of language exposure provided. The design of our research project has allowed us to study differences with respect to the age of onset (ranging from 2 to 18+), but in this article we focus on students who began English instruction at the age of 8 (LOGSE educational system) and those who began at the age of 11 (EGB). We have collected data from both groups after 200 instructional hours (Time 1) and 416 instructional hours (Time 2), and we are currently collecting data after 726 instructional hours (Time 3). We have designed and administered a variety of tests: tests of English production and reception, both oral and written, within both academic and communicative approaches, and of the learners' L1 (Spanish and Catalan), as well as a questionnaire eliciting personal and sociolinguistic information. The questions we address and the relevant empirical evidence are as follows. 1. "For young children, learning languages is a game. They enjoy it more than adults." Our data show that this is not quite so. Firstly, at both Primary and Secondary levels, students have a positive attitude towards learning English (ranging from 70.5% in 11-year-olds to 89% in 14-year-olds). Secondly, the two groups differ in the factors they cite as driving their motivation to learn English: the younger students cite intrinsic factors, such as the games they play, the methodology used and the teacher, whereas the older students cite extrinsic factors, such as the role of their knowledge of English in achieving their future professional goals. 2. "Young children have more resources for learning languages." Here our data suggest just the opposite. The ability to employ learning strategies (actions or steps used) increases with age; older learners' strategies are more varied and cognitively more complex, whereas younger learners depend more on their interlocutor and on external resources, and therefore show less autonomy in their learning. 3. "Young children don't talk much but understand a lot." This third generalization does seem to be confirmed, at least to a certain extent, by our analysis of age-related differences in productive use of the target language. As noted above, the comparatively slower progress of the younger learners is confirmed, and our analysis of interpersonal receptive abilities likewise demonstrates the advantage of the older learners.
Nevertheless, with respect to passive receptive activities (for example, simple recognition of words or sentences), no great differences are observed. Statistical analyses suggest that in this test, in contrast to the others analyzed, the dominance of the subjects' L1s (reflecting a cognitive capacity that grows with age) has no significant influence on the learning process. 4. "The sooner they begin, the better their results will be in written language." This is not completely confirmed by our research either. First, we find that certain compensatory strategies disappear only with age, not with the number of instructional hours. Second, given an identical number of instructional hours, the older subjects obtain better results. Comparing subjects of the same age (12 years old) but with different numbers of instructional hours (200 and 416 hours, having begun at ages 11 and 8 respectively), we observe that those who began earlier excel only in lexical fluency. In conclusion, the faster progress of older learners appears to be due to their higher level of cognitive development, which allows them to benefit more from formal or explicit instruction in the school context. Younger learners, by contrast, do not receive the quantity and quality of linguistic exposure typical of a natural acquisition context, which would allow them to make use of implicit learning abilities. It seems clear, then, that the initiative in this country to begin foreign language instruction earlier will have positive effects only if it is combined either with more exposure time to the foreign language or with its use as the language of instruction in other areas of the curriculum.
Abstract:
We investigate the relevance of morphological operators for the classification of land use in urban scenes using submetric panchromatic imagery. A support vector machine is used for the classification. Six types of filters have been employed: opening and closing, opening and closing by reconstruction, and opening and closing top-hat. The type and scale of the filters are discussed, and a feature selection algorithm called recursive feature elimination is applied to decrease the dimensionality of the input data. The analysis performed on two QuickBird panchromatic images showed that simple opening and closing operators are the most relevant for classification at such a high spatial resolution. Moreover, mixed sets combining simple and reconstruction filters provided the best performance. Tests performed on the two images, whose areas are characterized by different architectural styles, yielded similar results for both feature selection and classification accuracy, suggesting that the highlighted feature sets generalize well.
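A hedged sketch of this processing chain, using standard scikit-image and scikit-learn calls on synthetic stand-in data: per-pixel features from openings, closings and top-hats at several scales, followed by recursive feature elimination around a linear SVM. Reconstruction filters and the real QuickBird imagery are omitted, and all names and sizes below are ours:

```python
import numpy as np
from skimage.morphology import opening, closing, disk
from sklearn.svm import SVC
from sklearn.feature_selection import RFE

def morphological_features(image: np.ndarray, radii=(1, 3, 5, 7)) -> np.ndarray:
    """Per-pixel feature stack: the raw image plus openings and closings at
    several scales and the corresponding top-hats (image minus opening,
    closing minus image)."""
    feats = [image]
    for r in radii:
        o, c = opening(image, disk(r)), closing(image, disk(r))
        feats += [o, c, image - o, c - image]
    return np.stack(feats, axis=-1).reshape(-1, 1 + 4 * len(radii))

# Hypothetical training data: a small panchromatic tile and pixel labels
rng = np.random.default_rng(3)
tile = rng.random((64, 64))
labels = rng.integers(0, 2, 64 * 64)   # stand-in land-use classes
X = morphological_features(tile)
# Recursive feature elimination around a linear SVM
selector = RFE(SVC(kernel="linear"), n_features_to_select=8).fit(X, labels)
print(selector.support_)               # mask of the retained features
```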
Abstract:
Problems involving multiphase flow in porous media are of great interest in many scientific and engineering applications, including carbon capture and storage, oil recovery and groundwater remediation. The intrinsic complexity of multiphase systems and the multiscale heterogeneity of geological formations are the major challenges in understanding and modeling immiscible displacement in porous media. Upscaled descriptions based on generalizations of Darcy's law are widely used, but they are subject to several limitations for flows that exhibit hysteretic and history-dependent behavior. Recent advances in high-performance computing and the development of accurate methods to characterize pore space and phase distribution have fostered the use of models that allow sub-pore resolution. These models provide insight into flow characteristics that cannot easily be observed in laboratory experiments and can be used to explain the gap between physical processes and existing macro-scale models. We focus on direct numerical simulation: we solve the Navier-Stokes equations for mass and momentum conservation in the pore space and employ the Volume of Fluid (VOF) method to track the evolution of the interface. In VOF, the distribution of the phases is described by a fluid function (whole-domain formulation), and special boundary conditions account for the wetting properties of the porous medium. In the first part of this thesis we simulate drainage in a 2-D Hele-Shaw cell filled with cylindrical obstacles. We show that the proposed approach can handle very large density and viscosity ratios and is able to model the transition from stable displacement to viscous fingering. We then focus on the interpretation of the macroscopic capillary pressure, showing that pressure-averaging techniques are subject to several limitations and are not accurate in the presence of viscous effects and trapping. On the contrary, an energy-based definition allows the viscous and capillary contributions to be separated. In the second part of the thesis we investigate inertial effects associated with abrupt and irreversible reconfigurations of the menisci caused by interface instabilities. As a prototype of these phenomena we first consider the dynamics of a meniscus in an angular pore. We show that in a network of cubic pores, jumps and reconfigurations are so frequent that inertial effects lead to different fluid configurations. Due to the non-linearity of the problem, the distribution of the fluids influences the work done by pressure forces, which is in turn related to the pressure drop in Darcy's law. This suggests that these phenomena should be taken into account when upscaling multiphase flow in porous media. The last part of the thesis is devoted to proving the accuracy of the numerical approach by validation against experiments of unstable primary drainage in a quasi-2-D porous medium (a Hele-Shaw cell filled with cylindrical obstacles).
We perform simulations under different boundary conditions and using different models (2-D integrated and full 3-D), and we compare several macroscopic quantities with the corresponding experiments. Despite the intrinsic challenges of modeling unstable displacement, where by definition small perturbations can grow without bound, the numerical method gives satisfactory results for all the cases studied.
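As a toy illustration of the interface-tracking ingredient only: VOF advects a fluid fraction field with the flow. The one-dimensional, first-order upwind sketch below (with a periodic boundary, and nothing like the thesis's coupled Navier-Stokes solver with geometric interface reconstruction) shows how a sharp interface is transported:

```python
import numpy as np

def advect_vof(alpha: np.ndarray, velocity: float, dx: float, dt: float,
               steps: int) -> np.ndarray:
    """Toy 1-D illustration of VOF interface tracking: the fluid fraction
    alpha (0 = non-wetting fluid, 1 = wetting fluid) is advected with a
    first-order upwind scheme on a periodic grid."""
    assert velocity > 0 and velocity * dt / dx <= 1.0, "CFL condition"
    for _ in range(steps):
        flux = velocity * alpha                       # upwind flux
        alpha = alpha - dt / dx * (flux - np.roll(flux, 1))  # periodic wrap
        alpha = np.clip(alpha, 0.0, 1.0)              # keep fraction bounded
    return alpha

alpha0 = np.where(np.arange(100) < 30, 1.0, 0.0)  # sharp interface at cell 30
# After 40 steps at CFL 0.5 the interface has moved 20 cells (and smeared,
# as expected of a first-order scheme without interface reconstruction).
print(advect_vof(alpha0, velocity=1.0, dx=1.0, dt=0.5, steps=40).round(2))
```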