979 results for Modeling approaches
Abstract:
Published in "AIP Conference Proceedings", Vol. 1648
Multimodel inference and multimodel averaging in empirical modeling of occupational exposure levels.
Abstract:
Empirical modeling of exposure levels has been popular for identifying exposure determinants in occupational hygiene. Traditional data-driven methods used to choose a model on which to base inferences have typically not accounted for the uncertainty linked to the process of selecting the final model. Several new approaches propose making statistical inferences from a set of plausible models rather than from a single model regarded as 'best'. This paper introduces the multimodel averaging approach described in the monograph by Burnham and Anderson. In their approach, a set of plausible models is defined a priori by taking into account the sample size and prior knowledge of variables influencing exposure levels. The Akaike information criterion is then calculated to evaluate the relative support of the data for each model, expressed as an Akaike weight, interpreted as the probability that the model is the best approximating model given the model set. The model weights can then be used to rank models, quantify the evidence favoring one model over another, perform multimodel prediction, estimate the relative influence of the potential predictors, and estimate multimodel-averaged effects of determinants. The whole approach is illustrated with the analysis of a data set of 1500 volatile organic compound exposure levels collected by the Institute for Work and Health (Lausanne, Switzerland) over 20 years, each concentration having been divided by the relevant Swiss occupational exposure limit and log-transformed before analysis. Multimodel inference represents a promising procedure for modeling exposure levels: it incorporates the notion that several models can be supported by the data and permits evaluation, to a certain extent, of model selection uncertainty, which is seldom mentioned in current practice.
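As an illustration of the Akaike-weight machinery described in this abstract, the sketch below computes AIC differences, Akaike weights, an evidence ratio, and a multimodel-averaged effect from a set of candidate models. The AIC values and coefficient estimates are hypothetical placeholders, not values from the Lausanne data set.

```python
# Minimal sketch of Akaike weights and multimodel averaging (Burnham & Anderson style).
# The AIC values and coefficient estimates below are hypothetical placeholders.
import numpy as np

aic = np.array([412.3, 413.1, 415.8, 420.4])    # AIC of each candidate model
beta_hat = np.array([0.42, 0.38, 0.51, 0.11])   # one determinant's estimated effect in each model

delta = aic - aic.min()                         # AIC differences (delta_i)
weights = np.exp(-delta / 2.0)
weights /= weights.sum()                        # Akaike weights w_i, summing to 1

ranking = np.argsort(-weights)                  # models ranked by relative support
beta_avg = np.sum(weights * beta_hat)           # multimodel-averaged effect of the determinant
evidence_ratio = weights[ranking[0]] / weights[ranking[1]]  # evidence favoring best over runner-up

print(weights, beta_avg, evidence_ratio)
```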
Abstract:
Dengue fever is currently the most important arthropod-borne viral disease in Brazil. Mathematical modeling of disease dynamics is a very useful tool for the evaluation of control measures. To be used in decision-making, however, a mathematical model must be carefully parameterized and validated with epidemiological and entomological data. In this work, we developed a simple dengue model to answer three questions: (i) which parameters are worth pursuing in the field in order to develop a dengue transmission model for Brazilian cities; (ii) how vector density spatial heterogeneity influences control efforts; and (iii) what, within a degree of uncertainty, is the invasion potential of dengue virus type 4 (DEN-4) in Rio de Janeiro city. Our model consists of an expression for the basic reproductive number (R0) that incorporates vector density spatial heterogeneity. To deal with the uncertainty regarding parameter values, we parameterized the model using a priori probability density functions covering a range of plausible values for each parameter, and generated parameter values with the Latin Hypercube Sampling procedure. We conclude that, even in the presence of vector spatial heterogeneity, the two most important entomological parameters to be estimated in the field are the mortality rate and the extrinsic incubation period. The spatial heterogeneity of the vector population increases the risk of epidemics and makes control strategies more complex. Lastly, we conclude that Rio de Janeiro is at risk of a DEN-4 invasion. Finally, we stress the point that epidemiologists, mathematicians, and entomologists need to interact more to find better approaches to the measurement and interpretation of the transmission dynamics of arthropod-borne diseases.
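A minimal sketch of the uncertainty-propagation step described above: Latin Hypercube Sampling over a priori parameter ranges, with R0 evaluated for each draw. The classical Ross-Macdonald expression used here is only a stand-in for the heterogeneity-adjusted R0 derived in the paper, and the parameter ranges are illustrative assumptions.

```python
# Latin Hypercube Sampling over plausible parameter ranges, propagated into R0.
# Ranges and the Ross-Macdonald-type formula are illustrative stand-ins.
import numpy as np
from scipy.stats import qmc

# parameter order: vector/host ratio m, biting rate a, transmission probabilities b and c,
# vector mortality mu (1/day), extrinsic incubation period tau (days), human recovery rate r (1/day)
l_bounds = [0.5, 0.3, 0.1, 0.1, 0.02, 7.0, 1 / 10]
u_bounds = [5.0, 1.0, 0.75, 0.75, 0.10, 14.0, 1 / 4]

sampler = qmc.LatinHypercube(d=7, seed=42)
samples = qmc.scale(sampler.random(n=1000), l_bounds, u_bounds)

m, a, b, c, mu, tau, r = samples.T
r0 = (m * a**2 * b * c * np.exp(-mu * tau)) / (mu * r)

print(f"median R0 = {np.median(r0):.2f}, P(R0 > 1) = {np.mean(r0 > 1):.2f}")
```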
Abstract:
Recent studies have pointed out a similarity between tectonic structures and those induced by slope tectonics. Numerous studies have demonstrated that structures and fabrics previously interpreted as of purely geodynamical origin are instead the result of large slope deformation, which has led in the past to erroneous interpretations. Nevertheless, the boundary between the two does not seem clearly defined and appears to be transitional. Some studies point out a continuity between failures developing at the surface and upper-crust movements. In this contribution, the main studies examining the link between rock structures and slope movements are reviewed. Aspects regarding the model and the scale of observation are discussed, together with the role of pre-existing weaknesses in the rock mass. As slope failures can develop through progressive failure, structures and their changes in time and space can be recognized. Furthermore, recognition of the origin of these structures can help avoid misinterpretations of regional geology. This also suggests the importance of integrating different slope movement classifications based on the distribution and pattern of deformation, and of applying structural geology techniques. A structural geology approach within the landslide community can greatly support the quantification of hazard and related risks, because most of the physical parameters used for landslide modeling are derived from geotechnical tests or emerging geophysical approaches.
Abstract:
Piecewise linear systems arise as mathematical models in many practical applications, often from the linearization of nonlinear systems. There are two main approaches to dealing with these systems, according to their continuous- or discrete-time nature. We propose an approach based on a state transformation, more particularly a partition of the phase portrait into regions, where each subregion is modeled as a two-dimensional linear time-invariant system. The Takagi-Sugeno model, which is a combination of the local models, is then computed. The simulation results show that the Alpha partition is well-suited for dealing with such a system.
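The following is a minimal sketch of the general idea of blending local linear models into a Takagi-Sugeno model: two illustrative two-dimensional LTI subsystems are combined through membership functions defined over a partition of the phase plane. The matrices, membership shapes, and partition are assumptions for illustration, not those of the paper (in particular, this is not the Alpha partition itself).

```python
# Takagi-Sugeno blending of two local 2-D linear models; all values are illustrative.
import numpy as np

A1 = np.array([[0.0, 1.0], [-2.0, -0.5]])   # local model for region 1
A2 = np.array([[0.0, 1.0], [-8.0, -1.5]])   # local model for region 2

def memberships(x):
    """Smooth weights for the two regions, based on the first state variable."""
    w1 = 1.0 / (1.0 + np.exp(5.0 * x[0]))   # close to 1 when x[0] << 0
    w2 = 1.0 - w1
    return np.array([w1, w2])

def ts_dynamics(x):
    """Blended (Takagi-Sugeno) vector field: weighted sum of the local dynamics."""
    w = memberships(x)
    return w[0] * A1 @ x + w[1] * A2 @ x

# simple forward-Euler simulation from an initial state
x, dt = np.array([1.0, 0.0]), 0.01
trajectory = [x.copy()]
for _ in range(1000):
    x = x + dt * ts_dynamics(x)
    trajectory.append(x.copy())
```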
Abstract:
The activation of the specific immune response against tumor cells is based on the recognition, by CD8+ Cytotoxic T Lymphocytes (CTL), of antigenic peptides (p) presented at the surface of the cell by the class I major histocompatibility complex (MHC). The ability of the so-called T-Cell Receptors (TCR) to discriminate between self and non-self peptides constitutes the most important specific control mechanism against infected cells. The TCR/pMHC interaction has been the subject of much attention in cancer therapy since the design of the adoptive transfer approach, in which T lymphocytes presenting an interesting response against tumor cells are extracted from the patient, expanded in vitro, and reinfused after immunodepletion, possibly leading to cancer regression. In the last decade, major progress has been achieved by the introduction of engineered lymphocytes. In the meantime, understanding the molecular aspects of the TCRpMHC interaction has become essential to guide in vitro and in vivo studies. In 1996, the determination of the first structure of a TCRpMHC complex by X-ray crystallography revealed the molecular basis of the interaction. Since then, molecular modeling techniques have taken advantage of crystal structures to study the conformational space of the complex and to understand the specificity of the recognition of the pMHC by the TCR. In the meantime, experimental techniques used to determine the sequences of TCR that bind to a pMHC complex have been used intensively, leading to the collection of large repertoires of TCR sequences that are specific for a given pMHC. There is a growing need for computational approaches capable of predicting the molecular interactions that occur upon TCR/pMHC binding without relying on the time-consuming resolution of a crystal structure. This work presents new approaches to analyze the molecular principles that govern the recognition of the pMHC by the TCR and the subsequent activation of the T-cell. We first introduce TCRep 3D, a new method to model and study the structural properties of TCR repertoires, based on homology and ab initio modeling. We discuss the methodology in detail and demonstrate that it outperforms state-of-the-art modeling methods in predicting relevant TCR conformations. Two successful applications of TCRep 3D that supported experimental studies on TCR repertoires are presented. Second, we present a rigid-body study of TCRpMHC complexes that gives a fair insight into how the TCR approaches the pMHC. We show that the binding mode of the TCR is correctly described by long-distance interactions. Finally, the last section is dedicated to a detailed analysis of an experimental hydrogen exchange study, which suggests that some regions of the constant domain of the TCR are subject to conformational changes upon binding to the pMHC. We propose a hypothesis of the structural signaling of TCR molecules leading to the activation of the T-cell, based on the analysis of correlated motions in the TCRpMHC structure.
Abstract:
Accurate characterization of the spatial distribution of hydrological properties in heterogeneous aquifers at a range of scales is a key prerequisite for reliable modeling of subsurface contaminant transport, and is essential for designing effective and cost-efficient groundwater management and remediation strategies. To this end, high-resolution geophysical methods have shown significant potential to bridge a critical gap in subsurface resolution and coverage between traditional hydrological measurement techniques such as borehole log/core analyses and tracer or pumping tests. An important and still largely unresolved issue, however, is how to best quantitatively integrate geophysical data into a characterization study in order to estimate the spatial distribution of one or more pertinent hydrological parameters, and thus improve hydrological predictions. Recognizing the importance of this issue, the aim of the research presented in this thesis was first to develop a strategy for the assimilation of several types of hydrogeophysical data having varying degrees of resolution, subsurface coverage, and sensitivity to the hydrological parameter of interest. In this regard, a novel simulated annealing (SA)-based conditional simulation approach was developed and then tested in its ability to generate realizations of porosity given crosshole ground-penetrating radar (GPR) and neutron porosity log data. This was done successfully for both synthetic and field data sets. A subsequent issue that needed to be addressed involved assessing the potential benefits and implications of the resulting porosity realizations in terms of groundwater flow and contaminant transport. This was investigated synthetically, assuming first that the relationship between porosity and hydraulic conductivity was well defined. Then, the relationship was itself investigated in the context of a calibration procedure using hypothetical tracer test data. Essentially, the relationship best predicting the observed tracer test measurements was determined given the geophysically derived porosity structure. Both of these investigations showed that the SA-based approach, in general, allows much more reliable hydrological predictions than the other, more elementary techniques considered. Further, the developed calibration procedure was seen to be very effective, even at the scale of tomographic resolution, for predictions of transport. This also held true at locations within the aquifer where only geophysical data were available. This is significant because the acquisition of hydrological tracer test measurements is clearly more complicated and expensive than the acquisition of geophysical measurements. Although the above methodologies were tested using porosity logs and GPR data, the findings are expected to remain valid for a large number of pertinent combinations of geophysical and borehole log data of comparable resolution and sensitivity to the hydrological target parameter. Moreover, the obtained results give us confidence that future developments in integration methodologies for geophysical and hydrological data can further improve the 3-D estimation of hydrological properties.
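To make the simulated annealing idea concrete, here is a heavily simplified sketch: a one-dimensional porosity profile is conditioned to a few borehole-log values, and unconditioned cells are swapped (which preserves the histogram) until the lag-1 correlation approaches a target standing in for the GPR-derived spatial structure. The objective function, annealing schedule, and all numbers are illustrative assumptions, not the thesis implementation.

```python
# Simplified simulated-annealing conditional simulation of a 1-D porosity profile.
import numpy as np

rng = np.random.default_rng(0)
n = 200
target_lag1_corr = 0.8                       # assumed spatial-structure target
cond_idx = np.array([10, 80, 150])           # cells with neutron-log porosity values
cond_val = np.array([0.25, 0.32, 0.28])

# start from a random field drawn from the target histogram, then fix conditioning data
field = rng.normal(0.30, 0.04, n)
field[cond_idx] = cond_val

def objective(f):
    """Squared mismatch between the field's lag-1 correlation and the target."""
    return (np.corrcoef(f[:-1], f[1:])[0, 1] - target_lag1_corr) ** 2

free = np.setdiff1d(np.arange(n), cond_idx)  # only unconditioned cells may be perturbed
temp, cooling = 1e-2, 0.999
obj = objective(field)
for _ in range(20000):
    i, j = rng.choice(free, 2, replace=False)
    field[i], field[j] = field[j], field[i]          # a swap preserves the histogram
    new_obj = objective(field)
    if new_obj < obj or rng.random() < np.exp((obj - new_obj) / temp):
        obj = new_obj                                # accept the swap
    else:
        field[i], field[j] = field[j], field[i]      # undo the swap
    temp *= cooling                                  # annealing schedule
```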
Abstract:
Cefepime is a broad-spectrum cephalosporin indicated for in-hospital treatment of severe infections. Acute neurotoxicity, an increasingly recognized adverse effect of this drug in overdose, predominantly affects patients with reduced renal function. Although dialytic approaches have been advocated to treat this condition, their role in this indication remains unclear. We report the case of an 88-year-old female patient with impaired renal function who developed life-threatening neurologic symptoms during cefepime therapy. She was treated with two intermittent 3-hour high-flux, high-efficiency hemodialysis sessions. Serial pre-, post-, and peridialytic (pre- and postfilter) serum cefepime concentrations were measured. Pharmacokinetic modeling showed that this dialytic strategy allowed serum cefepime concentrations to return to the estimated nontoxic range 15 hours earlier than would have been the case without an intervention. The patient made a full clinical recovery over the next 48 hours. We conclude that at least one session of intermittent hemodialysis may shorten the time to return to the nontoxic range in severe, clinically patent intoxication, and that it should be considered early in the clinical course, pending chemical confirmation, even in frail elderly patients. Careful dosage adjustment and a high index of suspicion are essential in this population.
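A back-of-the-envelope sketch of the pharmacokinetic reasoning: first-order (one-compartment) elimination with a prolonged half-life off dialysis and a much shorter apparent half-life during a high-flux session, comparing the time needed to fall below a presumed nontoxic threshold. All concentrations, half-lives, and the threshold are illustrative assumptions, not the patient data reported in the case.

```python
# One-compartment, first-order elimination: time to a presumed nontoxic concentration
# with and without one intermittent hemodialysis session. All numbers are assumptions.
import numpy as np

c0 = 60.0            # starting serum concentration, mg/L (hypothetical)
c_safe = 20.0        # assumed nontoxic threshold, mg/L
t_half_renal = 13.0  # h, prolonged half-life with impaired renal function (assumed)
t_half_hd = 2.5      # h, apparent half-life during high-flux hemodialysis (assumed)

k_renal = np.log(2) / t_half_renal
k_hd = np.log(2) / t_half_hd

def time_to_safe_without_hd():
    return np.log(c0 / c_safe) / k_renal

def time_to_safe_with_hd(session_h=3.0):
    # one 3-hour dialysis session starting immediately, then renal elimination only
    c_after = c0 * np.exp(-k_hd * session_h)
    if c_after <= c_safe:
        return np.log(c0 / c_safe) / k_hd
    return session_h + np.log(c_after / c_safe) / k_renal

print(f"without HD: {time_to_safe_without_hd():.1f} h, "
      f"with one session: {time_to_safe_with_hd():.1f} h")
```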
Abstract:
Because of the increase in workplace automation and the diversification of industrial processes, workplaces have become more and more complex. The classical approaches used to address workplace hazard concerns, such as checklists or sequence models, are therefore of limited use in such complex systems. Moreover, because of the multifaceted nature of workplaces, the use of single-oriented methods, such as AEA (man oriented), FMEA (system oriented), or HAZOP (process oriented), is not satisfactory. The use of a dynamic modeling approach that allows multiple-oriented analyses may constitute an alternative to overcome this limitation. The qualitative modeling aspects of the MORM (man-machine occupational risk modeling) model are discussed in this article. The model, realized with an object-oriented Petri net tool (CO-OPN), has been developed to simulate and analyze industrial processes from an OH&S perspective. The industrial process is modeled as a set of interconnected subnets (state spaces), which describe its constitutive machines. Process-related factors are introduced, in an explicit way, through machine interconnections and flow properties. Man-machine interactions are modeled as triggering events for the state spaces of the machines, and the CREAM cognitive behavior model is used to establish the relevant triggering events. In the CO-OPN formalism, the model is expressed as a set of interconnected CO-OPN objects defined over data types expressing the measure attached to the flow of entities transiting through the machines. Constraints on the measures assigned to these entities are used to determine the state changes in each machine. Interconnecting machines implies the composition of such flows and, consequently, the interconnection of the measure constraints. This is reflected in the construction of constraint enrichment hierarchies, which can be used for simulation and analysis optimization in a clear mathematical framework. The use of Petri nets to perform multiple-oriented analysis opens perspectives in the field of industrial risk management. It may significantly reduce the duration of the assessment process; but, most of all, it opens perspectives in the fields of risk comparison and integrated risk management. Moreover, because of the generic nature of the model and tool used, the same concepts and patterns may be used to model a wide range of systems and application fields.
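The abstract's core idea, machines as interconnected state spaces whose changes are triggered by man-machine interaction events, can be illustrated with an ordinary place/transition Petri net. The sketch below is not the CO-OPN formalism (which is object-oriented and typed); the places, transitions, and token counts are illustrative.

```python
# Minimal place/transition Petri-net sketch: tokens in places, transitions that fire
# when all their input places hold tokens. Names and structure are illustrative.
places = {"machine_idle": 1, "part_waiting": 1, "machine_busy": 0, "part_done": 0}

transitions = {
    "operator_loads_part": {"in": ["machine_idle", "part_waiting"], "out": ["machine_busy"]},
    "machine_finishes":    {"in": ["machine_busy"], "out": ["machine_idle", "part_done"]},
}

def enabled(t):
    """A transition is enabled when every input place holds at least one token."""
    return all(places[p] > 0 for p in transitions[t]["in"])

def fire(t):
    """Fire an enabled transition: consume input tokens, produce output tokens."""
    assert enabled(t), f"{t} is not enabled"
    for p in transitions[t]["in"]:
        places[p] -= 1
    for p in transitions[t]["out"]:
        places[p] += 1

fire("operator_loads_part")   # a man-machine interaction as a triggering event
fire("machine_finishes")
print(places)
```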
Abstract:
Evidence is accumulating that biodiversity is facing the effects of global change. The most influential drivers of change in ecosystems are land-use change, alien species invasions and climate change impacts. Accurate projections of species' responses to these changes are needed to propose mitigation measures to slow down the ongoing erosion of biodiversity. Niche-based models (NBM) currently represent one of the only tools for such projections. However, their application in the context of global changes relies on restrictive assumptions, calling for cautious interpretation. In this thesis I aim to assess the effectiveness and shortcomings of niche-based models for the study of global change impacts on biodiversity, through the investigation of specific, unsolved limitations and the suggestion of new approaches. Two studies investigating threats to rare and endangered plants are presented. I review the ecogeographic characteristics of 118 endangered plants with high conservation priority in Switzerland. The prevalence of rarity types among plant species is analyzed in relation to IUCN extinction risks. The review underlines the importance of regional vs. global conservation and shows that a global assessment of rarity might be misleading for some species, because it can fail to account for different degrees of rarity at a variety of spatial scales. The second study tests a modeling framework including iterative steps of modeling and field surveys to improve the sampling of rare species. The approach is illustrated with a rare alpine plant, Eryngium alpinum, and shows promise for complementing conservation practices and reducing sampling costs. Two studies illustrate the impacts of climate change on African taxa. The first assesses the sensitivity of 277 African mammals to climate change by 2050 in terms of species richness and turnover. It shows that a substantial number of species could be critically endangered in the future, and that national parks situated in xeric ecosystems are not expected to meet their mandate of protecting current species diversity in the future. The second study models the 2050 distribution of 975 endemic plant species in southern Africa. The study proposes the inclusion of new methodological insights improving the accuracy and ecological realism of predictions in global change studies. It also investigates the possibility of estimating a priori the sensitivity of a species to climate change from its geographical distribution and ecological properties. Three studies illustrate the application of NBM in the study of biological invasions. The first investigates the northward expansion of Lactuca serriola L. in Europe during the last 250 years in relation to climate change. In the last two decades, the species has not been able to track climate change, owing to non-climatic influences.
A second study analyses the potential invasion extent of spotted knapweed, a European weed first introduced into North America in the 1890s. The study provides some of the first empirical evidence that an invasive species can occupy climatically distinct niche spaces following its introduction into a new area. Models fail to predict the full current extent of the invasion, but correctly predict areas of introduction. An alternative approach, involving the calibration of models with pooled data from both ranges, is proposed to improve predictions of the extent of invasion over models based solely on the native range. I finally present a review on the dynamic nature of ecological niches in space and time. It synthesizes recent theoretical developments concerning niche conservatism and proposes solutions to improve confidence in NBM predictions of the impacts of climate change and species invasions on species distributions.
Abstract:
The purpose of the present article is to take stock of a recent exchange in Organizational Research Methods between critics (Rönkkö & Evermann, 2013) and proponents (Henseler et al., 2014) of partial least squares path modeling (PLS-PM). The two target articles were centered around six principal issues, namely whether PLS-PM: (1) can be truly characterized as a technique for structural equation modeling (SEM); (2) is able to correct for measurement error; (3) can be used to validate measurement models; (4) accommodates small sample sizes; (5) is able to provide null hypothesis tests for path coefficients; and (6) can be employed in an exploratory, model-building fashion. We summarize and elaborate further on the key arguments underlying the exchange, drawing from the broader methodological and statistical literature in order to offer additional thoughts concerning the utility of PLS-PM and ways in which the technique might be improved. We conclude with recommendations as to whether and how PLS-PM serves as a viable contender to SEM approaches for estimating and evaluating theoretical models.
Abstract:
Self-potential (SP) data are of interest to vadose zone hydrology because of their direct sensitivity to water flow and ionic transport. There is unfortunately little consensus in the literature about how best to model SP data under partially saturated conditions, and different approaches (often supported by a single laboratory data set) have been proposed. We argue that this lack of agreement can largely be traced to electrode effects that have not been properly taken into account. We considered a series of drainage and imbibition experiments in which we found that previously proposed approaches to remove electrode effects were unlikely to provide adequate corrections. Instead, we explicitly modeled the electrode effects together with the classical SP contributions using a flow and transport model. The simulated data agreed overall with the observed SP signals and allowed the different signal contributions to be decomposed and analyzed separately. After reviewing other published experimental data, we suggest that most of them include electrode effects that have not been properly accounted for. Our results suggest that previously presented SP theory works well when the modeling uncertainties presently associated with electrode effects are considered. Additional work is warranted not only to develop suitable electrodes for laboratory experiments but also to ensure that the electrode effects that appear inevitable in longer-term experiments are predictable, so that they can be incorporated into the modeling framework.
Abstract:
Modeling the mechanisms that determine how humans and other agents choose among different behavioral and cognitive processes (be they strategies, routines, actions, or operators) represents a paramount theoretical stumbling block across disciplines, ranging from the cognitive and decision sciences to economics, biology, and machine learning. By using the cognitive and decision sciences as a case study, we provide an introduction to what is also known as the strategy selection problem. First, we explain why many researchers assume humans and other animals to come equipped with a repertoire of behavioral and cognitive processes. Second, we expose three descriptive, predictive, and prescriptive challenges that are common to all disciplines which aim to model the choice among these processes. Third, we give an overview of different approaches to strategy selection. These include cost-benefit, ecological, learning, memory, unified, connectionist, sequential sampling, and maximization approaches. We conclude by pointing to opportunities for future research and by stressing that the selection problem is far from being resolved.
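To make one of the listed families concrete, here is a toy sketch of a cost-benefit approach to strategy selection: each strategy in the repertoire carries an anticipated accuracy and an effort cost, and the agent picks the one with the highest net benefit. The strategy names and numbers are illustrative assumptions, not taken from the article.

```python
# Toy cost-benefit strategy selection over a small repertoire; values are illustrative.
from dataclasses import dataclass

@dataclass
class Strategy:
    name: str
    expected_accuracy: float   # anticipated benefit of applying the strategy
    effort_cost: float         # anticipated cost (time, cognitive effort)

def select(strategies, cost_weight=1.0):
    """Return the strategy with the highest expected benefit minus weighted cost."""
    return max(strategies, key=lambda s: s.expected_accuracy - cost_weight * s.effort_cost)

repertoire = [
    Strategy("take-the-best", 0.78, 0.10),
    Strategy("weighted-additive", 0.85, 0.40),
    Strategy("random-guess", 0.50, 0.01),
]
print(select(repertoire, cost_weight=0.5).name)
```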
Abstract:
The paper presents some contemporary approaches to spatial environmental data analysis. The main topics concern the decision-oriented problems of environmental spatial data mining and modeling: valorization and representativity of data with the help of exploratory data analysis, spatial predictions, probabilistic and risk mapping, and the development and application of conditional stochastic simulation models. The innovative part of the paper presents an integrated/hybrid model: machine learning (ML) residuals sequential simulations (MLRSS). The models are based on multilayer perceptron and support vector regression ML algorithms, used for modeling long-range spatial trends, and sequential simulations of the residuals. ML algorithms deliver non-linear solutions for spatially non-stationary problems, which are difficult to handle with a geostatistical approach. Geostatistical tools (variography) are used to characterize the performance of the ML algorithms by analyzing the quality and quantity of the spatially structured information they extract from the data. Sequential simulations provide an efficient assessment of uncertainty and spatial variability. A case study of the Chernobyl fallout illustrates the performance of the proposed model. It is shown that probability mapping, provided by the combination of ML data-driven and geostatistical model-based approaches, can be used efficiently in the decision-making process.
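A skeleton of the hybrid trend-plus-residuals idea on synthetic data: a multilayer perceptron captures the long-range spatial trend, and stochastic realizations are built by adding resampled residuals back to the trend. The residual resampling is a crude placeholder for the sequential simulation step (it ignores the residuals' spatial correlation), and all data and thresholds are synthetic assumptions.

```python
# Hybrid sketch: MLP spatial trend + stochastic residual realizations (synthetic data).
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
coords = rng.uniform(0, 100, size=(300, 2))                     # sampling locations (x, y)
values = 0.05 * coords[:, 0] + np.sin(coords[:, 1] / 10) + rng.normal(0, 0.2, 300)

trend_model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000, random_state=0)
trend_model.fit(coords, values)
residuals = values - trend_model.predict(coords)                # de-trended data

grid = np.array([[x, y] for x in range(0, 100, 5) for y in range(0, 100, 5)])
n_real = 50
realizations = np.empty((n_real, len(grid)))
for k in range(n_real):
    # placeholder for sequential simulation: resample residuals and add them to the trend
    realizations[k] = trend_model.predict(grid) + rng.choice(residuals, size=len(grid))

prob_exceed = (realizations > 2.0).mean(axis=0)                 # probability map for an arbitrary level
```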
Abstract:
The software development industry is constantly evolving. The rise of agile methodologies in the late 1990s and the emergence of new development tools and technologies demand growing attention from everybody working in this industry. Organizations have, however, had a mixture of various processes and different process languages, since a standard software development process language has not been available. A promising process meta-model, the Software & Systems Process Engineering Meta-Model (SPEM) 2.0, has been released recently. It is applied by tools such as Eclipse Process Framework Composer, which is designed for implementing and maintaining processes and method content, and aims to support a broad variety of project types and development styles. This thesis presents the concepts of software processes, models, traditional and agile approaches, method engineering, and software process improvement. Some of the best-known methodologies (RUP, OpenUP, OpenMethod, XP and Scrum) are also introduced and compared. The main focus is on the Eclipse Process Framework and SPEM 2.0, their capabilities, usage and modeling. As a proof of concept, I present a case study of modeling OpenMethod with EPF Composer and SPEM 2.0. The results show that the new meta-model and tool make it possible to easily manage method content, publish versions with customized content, and connect project tools (such as MS Project) with the process content. Software process modeling also acts as a process improvement activity.