867 results for "Model of the semantic fields"


Relevance: 100.00%

Abstract:

The study of the effects of spatially uniform fields on the steady-state properties of Axelrod's model has yielded plenty of counterintuitive results. Here, we re-examine the impact of this type of field for a selection of parameters such that the field-free steady state of the model is heterogeneous or multicultural. Analyses of both the one- and two-dimensional versions of Axelrod's model indicate that the steady state remains heterogeneous regardless of the value of the field strength. Turning on the field leads to a discontinuous decrease in the number of cultural domains, which we argue is due to the instability of the zero-field heterogeneous absorbing configurations. We find, however, that spatially nonuniform fields that implement a consensus rule among the neighborhood of the agents enforce homogenization. Although the overall effects of the fields are essentially the same irrespective of the dimensionality of the model, we argue that the dimensionality has a significant impact on the stability of the field-free homogeneous steady state.
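To make the dynamics concrete, here is a minimal sketch of a one-dimensional Axelrod model with a spatially uniform external field, treating the field as an extra interaction partner chosen with probability B. The parameter values and this particular coupling of agents to the field are illustrative assumptions, not the authors' code.

```python
# Minimal 1D Axelrod model with a uniform external field (illustrative).
import random

F, Q, N, B = 5, 3, 100, 0.1           # features, traits, agents, field strength
field = [0] * F                        # spatially uniform "media" message
agents = [[random.randrange(Q) for _ in range(F)] for _ in range(N)]

def step():
    i = random.randrange(N)
    # with probability B the agent interacts with the field, else a neighbour
    partner = field if random.random() < B else agents[(i + random.choice((-1, 1))) % N]
    shared = [k for k in range(F) if agents[i][k] == partner[k]]
    differ = [k for k in range(F) if agents[i][k] != partner[k]]
    # homophily: interaction occurs with probability = fraction of shared features
    if differ and random.random() < len(shared) / F:
        agents[i][random.choice(differ)] = partner[random.choice(differ)] if False else partner[differ[random.randrange(len(differ))]]

for _ in range(200_000):
    step()
print(len({tuple(a) for a in agents}), "distinct cultures remain")
```

A spatially nonuniform consensus field, as studied in the paper, would replace `field` with the per-site plurality vector of each agent's neighbourhood.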

Relevance: 100.00%

Abstract:

Galaxy clusters occupy a special position in the cosmic hierarchy as they are the largest bound structures in the Universe. There is now general agreement on a hierarchical picture for the formation of cosmic structures, in which galaxy clusters form by accretion of matter and merging of smaller units. During merger events, shocks driven by the gravity of the dark matter heat the diffuse baryonic component up to the observed temperature. Radio and hard X-ray observations have discovered non-thermal components mixed with the thermal Intra Cluster Medium (ICM), a finding of great importance as it calls for a "revision" of the physics of the ICM. The bulk of present information comes from radio observations, which have discovered an increasing number of Mpc-sized emissions from the ICM: Radio Halos (at the cluster centre) and Radio Relics (at the cluster periphery). These sources are due to synchrotron emission from ultra-relativistic electrons diffusing through µG turbulent magnetic fields. Radio Halos are the most spectacular evidence of non-thermal components in the ICM, and understanding the origin and evolution of these sources is one of the most challenging goals of the theory of the ICM. Cluster mergers are the most energetic events in the Universe, and a fraction of the energy dissipated during these mergers could be channelled into the amplification of magnetic fields and into the acceleration of high-energy particles via the shocks and turbulence they drive. Present observations of Radio Halos (and possibly of hard X-rays) are best interpreted in terms of the re-acceleration scenario, in which MHD turbulence injected during cluster mergers re-accelerates high-energy particles in the ICM. The physics involved in this scenario is very complex and model details are difficult to test; however, the model clearly predicts some simple properties of Radio Halos (and of the resulting IC emission in the hard X-ray band) which are almost independent of the details of the adopted physics. In particular, in the re-acceleration scenario MHD turbulence is injected and dissipated during cluster mergers, and thus Radio Halos (and the resulting hard X-ray IC emission) should be transient phenomena (with a typical lifetime ≲ 1 Gyr) associated with dynamically disturbed clusters. The physics of the re-acceleration scenario should produce an unavoidable cut-off in the spectrum of the re-accelerated electrons, due to the balance between turbulent acceleration and radiative losses. The energy at which this cut-off occurs, and thus the maximum frequency at which synchrotron radiation is produced, depends essentially on the efficiency of the acceleration mechanism, so that observations at high frequencies are expected to catch only the most efficient phenomena, while, in principle, low-frequency radio surveys may find these phenomena to be much more common in the Universe. These basic properties should leave an important imprint on the statistical properties of Radio Halos (and of non-thermal phenomena in general) which, however, has not yet been addressed by present models. The main focus of this PhD thesis is to calculate, for the first time, the expected statistics of Radio Halos in the context of the re-acceleration scenario. In particular, we address the following main questions:
• Is it possible to model "self-consistently" the evolution of these sources together with that of the parent clusters?
• How is the occurrence of Radio Halos expected to change with cluster mass and to evolve with redshift? How does the efficiency of catching Radio Halos in galaxy clusters change with the observing radio frequency?
• How many Radio Halos are expected to form in the Universe? At which redshift is the bulk of these sources expected?
• Is it possible to reproduce, in the re-acceleration scenario, the observed occurrence and number of Radio Halos in the Universe and the observed correlations between thermal and non-thermal properties of galaxy clusters?
• Is it possible to constrain the magnetic field intensity and profile in galaxy clusters, and the energetics of turbulence in the ICM, from the comparison between model expectations and observations?
Several astrophysical ingredients are necessary to model the evolution and statistical properties of Radio Halos in the context of the re-acceleration model and to address the points above. For this reason, we devote some space in this thesis to reviewing the aspects of the physics of the ICM relevant to our goals. In Chapt. 1 we discuss the physics of galaxy clusters, and in particular the cluster formation process; in Chapt. 2 we review the main observational properties of non-thermal components in the ICM; and in Chapt. 3 we focus on the physics of magnetic fields and of particle acceleration in galaxy clusters. As a relevant application, the theory of Alfvénic particle acceleration is applied in Chapt. 4, where we report the most important results from calculations carried out in the framework of the re-acceleration scenario. In this Chapter we show that a fraction of the energy of fluid turbulence driven in the ICM by cluster mergers can be channelled into the injection of Alfvén waves at small scales, and that these waves can efficiently re-accelerate particles and trigger Radio Halos and hard X-ray emission. The main part of this PhD work, the calculation of the statistical properties of Radio Halos and non-thermal phenomena expected in the context of the re-acceleration model and their comparison with observations, is presented in Chapts. 5, 6, 7 and 8. In Chapt. 5 we present a first approach to semi-analytical calculations of the statistical properties of giant Radio Halos. The main goal of this Chapter is to model cluster formation, the injection of turbulence in the ICM and the resulting particle acceleration process. We adopt the semi-analytic extended Press & Schechter (PS) theory to follow the formation of a large synthetic population of galaxy clusters, and assume that during a merger a fraction of the PdV work done by the infalling subcluster in passing through the most massive one is injected in the form of magnetosonic waves. The stochastic acceleration of relativistic electrons by these waves, and the properties of the ensuing synchrotron (Radio Halos) and inverse Compton (IC, hard X-ray) emission of merging clusters, are then computed under the assumption of a constant rms magnetic field strength in the emitting volume. The main finding of these calculations is that giant Radio Halos are naturally expected only in the more massive clusters, and that the expected fraction of clusters with Radio Halos is consistent with the observed one. In Chapt. 6 we extend the previous calculations by including a scaling of the magnetic field strength with cluster mass.
The inclusion of this scaling allows us to derive the expected correlations between the synchrotron radio power of Radio Halos and the X-ray properties (T, L_X) and mass of the hosting clusters. For the first time, we show that these correlations, calculated in the context of the re-acceleration model, are consistent with the observed ones for typical µG strengths of the average B intensity in massive clusters. The calculations presented in this Chapter also allow us to derive the evolution of the probability of forming Radio Halos as a function of cluster mass and redshift. The most relevant finding presented in this Chapter is that the luminosity function of giant Radio Halos at 1.4 GHz is expected to peak around a radio power ≈ 10^24 W/Hz and to flatten (or cut off) at lower radio powers because of the decrease of the electron re-acceleration efficiency in smaller galaxy clusters. In Chapt. 6 we also derive the expected number counts of Radio Halos and compare them with available observations: we find that ≈ 100 Radio Halos in the Universe can be observed at 1.4 GHz with deep surveys, while more than 1000 Radio Halos are expected to be discovered in the near future by LOFAR at 150 MHz. This is the first (and so far unique) model expectation for the number counts of Radio Halos at lower frequencies, and it allows the design of future radio surveys. Based on the results of Chapt. 6, in Chapt. 7 we present work in progress on a "revision" of the occurrence of Radio Halos. We combine past results from the NVSS radio survey (z ≈ 0.05–0.2) with our ongoing GMRT Radio Halo Pointed Observations of 50 X-ray-luminous galaxy clusters (at z ≈ 0.2–0.4) and discuss the possibility of testing our model expectations with the number counts of Radio Halos at z ≈ 0.05–0.4. The most relevant limitation of the calculations presented in Chapts. 5 and 6 is the assumption of an "average" size of Radio Halos, independent of their radio luminosity and of the mass of the parent clusters. This assumption cannot be relaxed in the context of the PS formalism used to describe the cluster formation process, while a more detailed analysis of the physics of cluster mergers and of the injection of turbulence in the ICM would require an approach based on numerical (possibly MHD) simulations of a very large volume of the Universe, which is well beyond the aim of this thesis. On the other hand, in Chapt. 8 we report our discovery of novel correlations between the size (R_H) of Radio Halos and their radio power, and between R_H and the cluster mass within the Radio Halo region, M_H. In particular, this last "geometrical" M_H–R_H correlation allows us to "observationally" overcome the limitation of the "average" size of Radio Halos. Thus in this Chapter, by making use of this "geometrical" correlation and of a simplified form of the re-acceleration model based on the results of Chapts. 5 and 6, we are able to discuss the expected correlations between the synchrotron power and the thermal cluster quantities relative to the radio-emitting region. This is a new and powerful tool of investigation, and we show that all the observed correlations (P_R–R_H, P_R–M_H, P_R–T, P_R–L_X, ...) become well understood in the context of the re-acceleration model.
In addition, we find that observationally the size of Radio Halos scales non-linearly with the virial radius of the parent cluster; this immediately means that the fraction of the cluster volume which is radio emitting increases with cluster mass, and thus that the non-thermal component in clusters is not self-similar.
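The cut-off argument above can be made concrete with a back-of-the-envelope balance between systematic turbulent acceleration, dγ/dt = γ/τ_acc, and synchrotron plus inverse-Compton losses. The sketch below uses cgs constants; the acceleration time and field strength are illustrative placeholders, not the thesis's fitted parameters.

```python
# Balance turbulent acceleration against synchrotron + IC losses (cgs units).
import math

SIGMA_T = 6.652e-25   # Thomson cross-section [cm^2]
ME_C    = 2.731e-17   # electron mass times c [g cm s^-1]
A_RAD   = 7.566e-15   # radiation constant [erg cm^-3 K^-4]

def gamma_break(tau_acc_yr, B_muG, z=0.0):
    """Lorentz factor where gamma/tau_acc equals the radiative loss rate
    (4 sigma_T / 3 m_e c) * gamma^2 * (U_B + U_CMB)."""
    tau = tau_acc_yr * 3.156e7                  # acceleration time [s]
    U_B = (B_muG * 1e-6) ** 2 / (8 * math.pi)   # magnetic energy density
    U_CMB = A_RAD * (2.725 * (1 + z)) ** 4      # CMB photon energy density
    return 3 * ME_C / (4 * SIGMA_T * tau * (U_B + U_CMB))

def break_freq_GHz(gamma, B_muG):
    # characteristic synchrotron frequency ~ 4.2 MHz * (B / Gauss) * gamma^2
    return 4.2e6 * (B_muG * 1e-6) * gamma ** 2 / 1e9

g = gamma_break(tau_acc_yr=1e8, B_muG=1.0)
print(f"gamma_break ~ {g:.1e}; synchrotron cut-off ~ {break_freq_GHz(g, 1.0):.1f} GHz")
```

With τ_acc ≈ 10⁸ yr and B ≈ 1 µG this gives γ ~ 10⁴ and a cut-off near the GHz band, illustrating why 1.4 GHz surveys select only the most efficient mergers while 150 MHz surveys should find halos far more common.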

Relevance: 100.00%

Abstract:

The atmosphere is a global influence on the movement of heat and humidity between the continents and thus significantly affects climate variability. Information about atmospheric circulation is of major importance for the understanding of different climatic conditions. Dust deposits from maar lakes and dry maars of the Eifel Volcanic Field (Germany) are therefore used as proxy data for the reconstruction of past aeolian dynamics.

In this thesis, two sediment cores from the Eifel region are examined: core SM3 from Lake Schalkenmehren and core DE3 from the Dehner dry maar. Both cores contain the tephra of the Laacher See eruption, which is dated to 12,900 years before present. Taken together, the cores cover the last 60,000 years: SM3 the Holocene, and DE3 the marine isotope stages MIS-3 and MIS-2, respectively. The frequencies of glacial dust-storm events and their paleo-wind directions are detected by high-resolution grain-size and provenance analysis of the lake sediments. Two different methods are applied: geochemical measurements of the sediment using µXRF scanning, and the particle-analysis method RADIUS (rapid particle analysis of digital images by ultra-high-resolution scanning of thin sections). It is shown that single dust layers in the lake sediment are characterised by an increased content of aeolian-transported carbonate particles. The limestone-bearing Eifel North-South zone is the most likely source of the carbonate-rich aeolian dust in the lake sediments of the Dehner dry maar. The dry maar is located on the western side of the Eifel North-South zone; carbonate-rich aeolian sediment is therefore most likely to have been transported towards the Dehner dry maar by easterly winds. A methodology is developed which limits the detection to the aeolian-transported carbonate particles in the sediment: the RADIUS-carbonate module.

In summary, during marine isotope stage MIS-3 both the storm frequency and the east-wind frequency are increased in comparison to MIS-2. These results lead to the suggestion that atmospheric circulation was affected by more turbulent conditions during MIS-3, in comparison to the more stable atmospheric circulation during the full glacial conditions of MIS-2.
The results of the investigations of the dust records are finally evaluated in relation to a study of atmospheric general circulation models (AGCMs) for a comprehensive interpretation. Here, AGCM experiments (ECHAM3 and ECHAM4) with different prescribed SST patterns are used to develop a synoptic interpretation of long-persisting east-wind conditions and of east-wind storm events, which are suggested to lead to an enhanced accumulation of sediment transported by easterly winds to the proxy site of the Dehner dry maar.

The basic observations made on the proxy record are also reflected in the 10 m wind vectors of the different model experiments under glacial conditions with different prescribed sea-surface-temperature patterns. Furthermore, the analysis of long-persisting east-wind conditions in the AGCM data shows a stronger seasonality under glacial conditions: all the experiments are characterised by an increase in the relative importance of the LEWIC during spring and summer. The different glacial experiments consistently show a shift of a long-lasting high from the Baltic Sea towards the north-west, directly above the Scandinavian Ice Sheet, together with a contemporary enhanced westerly circulation over the North Atlantic.

This thesis is a comprehensive analysis of atmospheric circulation patterns during the last glacial period. It has been possible to reconstruct important elements of the glacial paleo-climate in Central Europe. While the proxy data from the sediment cores yield a binary signal of wind-direction changes (east versus west wind), a synoptic interpretation using atmospheric circulation models is successful: it shows a possible distribution of high- and low-pressure areas and thus the direction and strength of the wind fields which have the capacity to transport dust. In conclusion, the combination of numerical models, to enhance understanding of processes in the climate system, with proxy data from the environmental record is the key to a comprehensive approach to paleo-climatic reconstruction.
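As a rough illustration of how dust-storm layers can be flagged in such records, the sketch below thresholds a carbonate proxy (here, Ca counts from a µXRF scan) against a running background. The window size, threshold factor, and the use of Ca alone are illustrative assumptions, not the calibrated RADIUS-carbonate procedure of the thesis.

```python
# Flag candidate dust layers where the Ca signal exceeds a running background.
import numpy as np

def flag_dust_layers(ca_counts, window=51, factor=2.0):
    ca = np.asarray(ca_counts, dtype=float)
    pad = window // 2
    padded = np.pad(ca, pad, mode="edge")
    # running median approximates the non-aeolian background signal
    background = np.array([np.median(padded[i:i + window]) for i in range(ca.size)])
    return ca > factor * background      # boolean mask of candidate layers

rng = np.random.default_rng(0)
counts = rng.normal(100.0, 10.0, 2000).clip(min=0.0)  # synthetic background
counts[[200, 750, 1400]] *= 8.0                       # three synthetic dust events
print(np.flatnonzero(flag_dust_layers(counts)))       # -> [ 200  750 1400]
```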

Relevance: 100.00%

Abstract:

A semantic approach towards political conflict first emerged in the 1930s; it provides the methodological foundations for the description of political conflicts, in particular regarding the correlation between the language of description and reality. Any military or political confrontation presupposes an axiological, conceptual and ideological confrontation. The form of an adequate description can only be comprehended if the characteristic features of its language (structure) and thesaurus are revealed. Admitting the possibility of different descriptions implies the necessity of analysing this possible ambiguity, i.e. the characteristic features of the language which enable us to form various statements, including mutually exclusive ones. The insoluble task of finding a middle ground between the viewpoints of the conflicting parties should be replaced by soluble procedures for explaining and assessing the conflicting axiologies. For the description of conflict situations, where it is essential to represent various positions within a uniform system, an apparatus of model semantics seems the most appropriate, both for generating alternatives and for bringing them together in a modal system of worlds in which procedures of transition from one world to another (i.e. the transworld compatibility between them) are also reflected. Reality is reconstructed not as a sort of middle ground between the mutually exclusive approaches, nor as their sum, but as the result of the overlapping of various worlds and of the procedures of transition from one state of affairs to another. The description of a conflict is therefore seen as a system of worlds connected by modal relations, with the system of worlds emerging as the reality to be described. This approach makes it possible to describe the processes from the points of view of the participating parties and, at the same time, to reveal their basic attitudes. The main idea of this research is reflected in the problems analysed: the description of conflict as methodology; language and behaviour (general problems of semiotic description); and the logico-semantic analysis of the notions of "problem and conflict", "genesis and chronology", and "the recurrent model of the (historical) explanation and interpretation of the conflict". Zolyan used data on the Karabagh conflict to demonstrate the dependence of the structure of semio-cultural codes on current political developments, and considered post-Soviet history as a semio-cultural problem. He sought to reveal the logic of manipulations with history, and proposed the logic of preferences as a possible instrument for achieving compromise.
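The "system of worlds connected by modal relations" can be given a toy computational reading along Kripke-semantics lines: each party's description is a world (a set of accepted propositions), and accessibility encodes admissible transitions between states of affairs. The propositions and the subset-based accessibility rule below are invented for illustration and are not drawn from Zolyan's material.

```python
# Toy Kripke-style model: worlds as proposition sets, accessibility as
# "reachable by weakening commitments" (one crude transition procedure).
from itertools import combinations

worlds = {
    "W_A": {"p", "q"},        # party A's description
    "W_B": {"p", "not_q"},    # party B's description: incompatible on q
    "W_C": {"p"},             # a weaker description both parties can reach
}
access = {(u, v) for u in worlds for v in worlds
          if worlds[v] <= worlds[u]}          # drop, never add, commitments

def contradictory(w1, w2):
    # mutually exclusive literals signal axiological conflict
    props = {s.removeprefix("not_") for s in worlds[w1] | worlds[w2]}
    return any(p in worlds[w1] and "not_" + p in worlds[w2]
               or "not_" + p in worlds[w1] and p in worlds[w2]
               for p in props)

for u, v in combinations(worlds, 2):
    common = [w for w in worlds if (u, w) in access and (v, w) in access]
    print(u, v, "contradictory:", contradictory(u, v), "common ground:", common)
```

Here W_A and W_B are contradictory, yet both can access W_C: the reconstructed "reality" is not a compromise content but the structure of worlds plus transitions.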

Relevance: 100.00%

Abstract:

Professor Sir David R. Cox (DRC) is widely acknowledged as among the most important scientists of the second half of the twentieth century. He inherited the mantle of statistical science from Pearson and Fisher, advanced their ideas, and translated statistical theory into practice so as to forever change the application of statistics in many fields, especially biology and medicine. The logistic and proportional hazards models he substantially developed are arguably among the most influential biostatistical methods in current practice. This paper looks forward over the period from DRC's 80th to 90th birthdays to speculate about the future of biostatistics, drawing lessons from DRC's contributions along the way. We consider "Cox's model" (CM) of biostatistics, an approach to statistical science that: formulates scientific questions or quantities in terms of parameters gamma in probability models f(y; gamma) that represent, in a parsimonious fashion, the underlying scientific mechanisms (Cox, 1997); partitions the parameters gamma = (theta, eta) into a subset of interest theta and other "nuisance parameters" eta necessary to complete the probability distribution (Cox and Hinkley, 1974); develops methods of inference about the scientific quantities that depend as little as possible upon the nuisance parameters (Barndorff-Nielsen and Cox, 1989); and thinks critically about the appropriate conditional distribution on which to base inferences. We briefly review exciting biomedical and public health challenges that are capable of driving statistical developments in the next decade. We discuss the statistical models and model-based inferences central to the CM approach, contrasting them with computationally intensive strategies for prediction and inference advocated by Breiman and others (e.g. Breiman, 2001) and with more traditional design-based methods of inference (Fisher, 1935). We discuss the hierarchical (multi-level) model as an example of the future challenges and opportunities for model-based inference. We then consider the role of conditional inference, a second key element of the CM. Recent examples from genetics are used to illustrate these ideas. Finally, the paper examines causal inference and statistical computing, two other topics we believe will be central to biostatistics research and practice in the coming decade. Throughout the paper, we attempt to indicate how DRC's work and the "Cox Model" have set a standard of excellence to which all can aspire in the future.
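The second and third elements of this list are neatly illustrated by Cox's own proportional hazards model, where the baseline hazard (the nuisance eta) drops out of the partial likelihood, leaving only the regression parameters of interest (theta = beta). The sketch below is a minimal NumPy/SciPy illustration on synthetic data, not code from the paper.

```python
# Cox partial likelihood on synthetic survival data: no baseline hazard needed.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
n = 200
x = rng.normal(size=n)
t = rng.exponential(1.0 / np.exp(0.7 * x))   # true beta = 0.7
event = rng.random(n) < 0.8                  # crude ~20% censoring indicator

def neg_log_partial_likelihood(beta, t, x, event):
    order = np.argsort(t)                    # risk set shrinks with time
    t, x, event = t[order], x[order], event[order]
    eta = beta[0] * x
    # log of sum of exp(eta) over the risk set {j : t_j >= t_i} (suffix sums)
    log_risk = np.log(np.cumsum(np.exp(eta)[::-1])[::-1])
    return -np.sum((eta - log_risk)[event])  # only events contribute terms

fit = minimize(neg_log_partial_likelihood, x0=[0.0], args=(t, x, event))
print("estimated beta:", fit.x[0])           # close to 0.7, baseline eliminated
```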

Relevance: 100.00%

Abstract:

Stemmatology, or the reconstruction of the transmission history of texts, is a field that stands particularly to gain from digital methods. Many scholars already take stemmatic approaches that rely heavily on computational analysis of the collated text (e.g. Robinson and O'Hara 1996; Salemans 2000; Heikkilä 2005; Windram et al. 2008, among many others). Although there is great value in computationally assisted stemmatology, providing as it does a reproducible result and giving access to relevant methodological advances in related fields such as evolutionary biology, computational stemmatics is not without its critics. The current state of the art effectively forces scholars to choose between a preconceived judgment of the significance of textual differences (the Lachmannian or neo-Lachmannian approach, and the weighted phylogenetic approach) and making no judgment at all (the unweighted phylogenetic approach). Some basis for judging the significance of variation is sorely needed, for medieval text criticism in particular. By this we mean that there is a need for an empirical statistical profile of the text-genealogical significance of the different sorts of variation in different sorts of medieval texts. The rules that apply to copies of Greek and Latin classics may not apply to copies of medieval Dutch story collections; the practices of copying authoritative texts such as the Bible will most likely have differed from the practices of copying the Lives of local saints and other commonly adapted texts. It is nevertheless imperative that we have a consistent, flexible, and analytically tractable model for capturing these phenomena of transmission. In this article, we present a computational model that captures most of the phenomena of text variation, and a method for analysing one or more stemma hypotheses against the variation model. We apply this method to three 'artificial traditions' (i.e. texts copied under laboratory conditions by scholars to study the properties of text variation) and four genuine medieval traditions whose transmission history is known or deduced to varying degrees. Although our findings are necessarily limited by the small number of texts at our disposal, we demonstrate here some of the wide variety of calculations that can be made using our model. Some of our results call sharply into question the utility of excluding 'trivial' variation, such as orthographic and spelling changes, from stemmatic analysis.
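One simple instance of testing a stemma hypothesis against a variation model is a parsimony check: a variant location is "genealogical" on a given stemma when its readings require no coincident changes. Below is a minimal sketch using Fitch parsimony on an invented four-witness stemma; the readings are illustrative, not taken from the article's traditions.

```python
# Fitch parsimony: a location is genealogical on the stemma when the minimum
# number of reading changes equals (number of distinct readings - 1).
children = {"alpha": ["beta", "gamma"], "beta": ["A", "B"], "gamma": ["C", "D"]}

def fitch(node, readings):
    """Return (possible reading-set at node, change count in its subtree)."""
    if node not in children:                 # extant witness (leaf)
        return {readings[node]}, 0
    sets, changes = [], 0
    for child in children[node]:
        s, k = fitch(child, readings)
        sets.append(s)
        changes += k
    inter = set.intersection(*sets)
    return (inter, changes) if inter else (set.union(*sets), changes + 1)

loc1 = {"A": "king", "B": "king", "C": "lord", "D": "lord"}  # fits the stemma
loc2 = {"A": "grey", "B": "gray", "C": "grey", "D": "gray"}  # conflicts with it
for name, loc in (("loc1", loc1), ("loc2", loc2)):
    _, k = fitch("alpha", loc)
    print(name, "changes:", k, "genealogical:", k == len(set(loc.values())) - 1)
```

The second location mimics a spelling variant that conflicts with the stemma, exactly the kind of "trivial" variation whose routine exclusion our results call into question.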

Relevance: 100.00%

Abstract:

Simulating surface wind over complex terrain is a challenge in regional climate modelling. This study therefore aims at identifying a set-up of the Weather Research and Forecasting (WRF) model that minimises systematic errors of surface winds in hindcast simulations. Major factors of the model configuration are tested to find a suitable set-up: the horizontal resolution, the planetary boundary layer (PBL) parameterisation scheme, and the way WRF is nested into the driving data set. Hence, a number of sensitivity simulations at a spatial resolution of 2 km are carried out and compared to observations. Given the importance of wind storms, the analysis is based on case studies of 24 historical wind storms that caused great economic damage in Switzerland. Each of these events is downscaled using eight different model set-ups sharing the same driving data set. The results show that the lack of representation of the unresolved topography leads to a general overestimation of wind speed in WRF. However, this bias can be substantially reduced by using a PBL scheme that explicitly considers the effects of non-resolved topography, which also improves the spatial structure of wind speed over Switzerland. The wind direction, although generally well reproduced, is not very sensitive to the PBL scheme. Further sensitivity tests include four types of nesting methods: nesting only at the boundaries of the outermost domain, analysis nudging, spectral nudging, and the so-called re-forecast method, where the simulation is frequently restarted. These simulations show that restricting the freedom of the model to develop large-scale disturbances slightly increases the temporal agreement with the observations, while further reducing the overestimation of wind speed, especially for maximum wind peaks. The model performance is also evaluated in the outermost domains, where the resolution is coarser. The results demonstrate the important role of horizontal resolution: the step from 6 to 2 km significantly improves model performance. In summary, the combination of a grid size of 2 km, the non-local PBL scheme modified to explicitly account for non-resolved orography, and analysis or spectral nudging is the superior combination when dynamical downscaling aims at reproducing real wind fields.
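The ranking of set-ups rests on simple verification statistics against station observations. A minimal sketch of such a comparison is given below; the numbers and the three metrics chosen (bias, correlation, peak error) are illustrative, not the study's actual verification suite.

```python
# Compare simulated and observed wind speeds for one storm at one station.
import numpy as np

def verify(model_ws, obs_ws):
    model_ws, obs_ws = np.asarray(model_ws), np.asarray(obs_ws)
    bias = np.mean(model_ws - obs_ws)             # > 0 means overestimation
    corr = np.corrcoef(model_ws, obs_ws)[0, 1]    # temporal agreement
    peak_err = model_ws.max() - obs_ws.max()      # error in the storm peak
    return bias, corr, peak_err

obs            = [4.1, 7.8, 15.2, 22.5, 18.0, 9.3]    # observed wind speed [m/s]
ws_default_pbl = [6.0, 10.1, 19.4, 28.8, 23.1, 12.0]  # typical overestimation
ws_topo_pbl    = [4.8, 8.5, 16.3, 23.9, 19.2, 10.1]   # topography-aware PBL run
for name, run in (("default PBL ", ws_default_pbl), ("topo-aware PBL", ws_topo_pbl)):
    print(name, "bias=%.1f corr=%.2f peak_err=%.1f" % verify(run, obs))
```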

Relevance: 100.00%

Abstract:

We investigate the transition from unitary to dissipative dynamics in the relativistic O(N) vector model with the λ(φ²)² interaction, using the nonperturbative functional renormalization group in the real-time formalism. In thermal equilibrium, the theory is characterized by two scales: the interaction range for coherent scattering of particles, and the mean free path determined by the rate of incoherent collisions with excitations in the thermal medium. Their competition determines the renormalization group flow and the effective dynamics of the model. Here we quantify the dynamic properties of the model in terms of the scale-dependent dynamic critical exponent z in the limit of large temperatures and in 2 ≤ d ≤ 4 spatial dimensions. We contrast our results with the behavior expected at vanishing temperature and address the question of the appropriate dynamic universality class for the given microscopic theory.
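For orientation, the exponent z is defined through the standard dynamic-scaling relations below (textbook definitions, not the paper's specific flow equations):

```latex
% Dynamic scaling: the relaxation time grows with the correlation length as
\tau \sim \xi^{z},
% equivalently, the characteristic frequency of modes at wavenumber k obeys
\omega(k) \sim k^{z} \quad \text{at criticality}.
% Undamped relativistic propagation, \omega = c k, corresponds to z = 1,
% while purely diffusive, dissipative dynamics corresponds to z \approx 2,
% which frames the unitary-to-dissipative transition studied here.
```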

Relevance: 100.00%

Abstract:

We have developed a new fully kinetic electrostatic simulation, HYBes, to study how the lunar landscape affects the electric potential and plasma distributions near the surface, and the properties of lifted dust. The model embodies new techniques that can be used in various types of physical environments and situations. We demonstrate the applicability of the new model in a situation involving three charged-particle species: solar wind electrons, solar wind protons, and lunar photoelectrons. Properties of dust are studied with test-particle simulations using the electric fields derived from the HYBes model. The simulations show the high importance of the plasma and the electric potential near the surface. For comparison, the electric potential gradients near landscapes with feature sizes of the order of the Debye length are much larger than those near a flat surface at various solar zenith angles. Furthermore, the dust test-particle simulations indicate that the landscape relief influences the distribution of dust over the surface. The study suggests that the local landscape has to be taken into account when the distributions of plasma and dust above the lunar surface are studied. The HYBes model can be applied not only to the Moon but also to a wide range of airless planetary objects such as Mercury, other planetary moons, asteroids, and non-active comets.
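A dust test-particle step of the kind described can be sketched as follows: a grain with a fixed charge-to-mass ratio moves in lunar gravity plus a near-surface electric field. The exponentially decaying sheath field and the q/m value below are toy assumptions; the actual study interpolates fields from the HYBes output.

```python
# Vertical trajectory of one charged dust grain above the lunar surface.
import math

G_MOON = 1.62                              # lunar surface gravity [m/s^2]

def E_field(z):
    """Toy upward sheath field [V/m], decaying over ~1 Debye-like metre."""
    return 4.0 * math.exp(-max(z, 0.0) / 1.0)

def fly(q_over_m, z0=0.01, dt=1e-3, t_max=60.0):
    """Integrate one grain (charge-to-mass ratio q_over_m [C/kg])."""
    z, vz, t, apex = z0, 0.0, 0.0, z0
    while z >= 0.0 and t < t_max:          # until it falls back or times out
        acc = q_over_m * E_field(z) - G_MOON
        vz += acc * dt                      # semi-implicit Euler step
        z += vz * dt
        t += dt
        apex = max(apex, z)
    return t, apex

t, apex = fly(q_over_m=5.0)                # illustrative ratio for a small grain
print(f"grain reached {apex:.1f} m and returned after {t:.1f} s")
```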

Relevance: 100.00%

Abstract:

Interpretation of ice-core records requires accurate knowledge of the past and present surface topography and stress-strain fields. The European Project for Ice Coring in Antarctica (EPICA) drilling site EDML (0.0684° E, 75.0025° S; 2891.7 m) in Dronning Maud Land, Antarctica, is located in the immediate vicinity of a transient and splitting ice divide. A digital elevation model is determined from the combination of kinematic GPS measurements with the GLAS12 data sets from the ICESat satellite. Based on a network of stakes surveyed with static GPS, the velocity field around the EDML drilling site is calculated. The annual mean velocity magnitude of the 12 survey points amounts to 0.74 m/a. Flow directions vary mainly according to their distance from the ice divide. Surface strain rates are determined from a pentagon-shaped stake network with one centre point close to the drilling site. The strain field is characterised by along-flow compression, lateral dilatation, and vertical layer thinning.
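The strain-rate estimate from such a stake network amounts to fitting a linear velocity field v(x) ≈ v₀ + Lx to the stake positions and velocities and taking the symmetric part of the gradient L. The sketch below uses synthetic pentagon-plus-centre data, not the EDML survey values.

```python
# Least-squares strain rates from a small stake network (synthetic data).
import numpy as np

# positions [km] and horizontal velocities [m/a] of a pentagon + centre stake
xy = np.array([[0, 0], [3, 0], [0.9, 2.9], [-2.4, 1.8], [-2.4, -1.8], [0.9, -2.9]], float)
v  = np.array([[0.740, 0.000], [0.729, 0.000], [0.737, 0.040],
               [0.749, 0.025], [0.749, -0.025], [0.737, -0.040]])

A = np.hstack([np.ones((len(xy), 1)), xy])      # columns: 1, x, y
coef, *_ = np.linalg.lstsq(A, v, rcond=None)    # rows: v0, dv/dx, dv/dy
L = coef[1:].T / 1000.0                         # velocity gradient [1/a] (km -> m)
eps = 0.5 * (L + L.T)                           # surface strain-rate tensor [1/a]

print("eps_xx=%.2e eps_yy=%.2e eps_xy=%.2e" % (eps[0, 0], eps[1, 1], eps[0, 1]))
# incompressibility gives the vertical strain rate (layer thinning if negative)
print("eps_zz = %.2e 1/a" % -(eps[0, 0] + eps[1, 1]))
```

With these synthetic values the fit yields along-flow compression (eps_xx < 0), lateral dilatation (eps_yy > 0) and vertical thinning (eps_zz < 0), matching the qualitative pattern described above.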

Relevance: 100.00%

Abstract:

Optimum conditions were selected for the chromatographic separation of model mixtures of C12-C40 n-alkanes. For one sample of hydrothermal deposits, extraction conditions for hydrocarbons were studied and a sample preparation procedure was selected. A procedure was proposed for determining n-alkanes in samples of hydrothermal deposits by means of gas chromatography-mass spectrometry (GC-MS). The detection limit for n-alkanes was 3×10^-9 to 10^-8 %, depending on the component. On the basis of the proposed procedure, the composition of n-alkanes was studied in samples of hydrothermal deposits collected at the Mid-Atlantic Ridge (Broken Spur, Lost City, and Rainbow hydrothermal fields). The analyses showed that the samples contained C14-C35 n-alkanes. Concentrations of the n-alkanes were rather low and varied from 0.002 to 0.038 µg/g. Hypotheses concerning the genesis of the identified n-alkanes are offered.

Relevance: 100.00%

Abstract:

This study examines the air flow behaviour around the geometry of a freight truck inside an AF6109 wind tunnel, with the purpose of predicting the velocity, pressure and turbulence fields produced by the air flow, decreasing the aerodynamic resistance, calculating the drag coefficient, evaluating the aerodynamics of the prototype geometry using the CFD technique, and comparing the results of the simulation with those obtained experimentally with the scaled "PETER 739 HAULER" freight truck model located on the floor of the test chamber. The geometry went through a numerical simulation process using CFX 5.7. The results showed the behaviour of the air flow through the test chamber, the variations of velocity and pressure at the exit of the chamber, and the calculation of the drag coefficient and drag force on the geometry of the freight truck. The evaluation of the aerodynamics showed that the aerodynamic deflector is a device that significantly helped reduce the drag produced by the air. Furthermore, the drag coefficient and drag force on the prototype freight truck could be estimated by establishing an incomplete similarity.
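The drag coefficient reported by such studies is conventionally C_d = 2F_d/(ρv²A), with F_d the measured drag force, ρ the air density, v the free-stream speed and A the frontal area. A quick sketch with illustrative numbers (not the values measured for this model) is:

```python
# Drag coefficient from a measured drag force: Cd = 2*Fd / (rho * v^2 * A).
def drag_coefficient(force_N, rho=1.225, v=30.0, area_m2=0.06):
    """rho: air density [kg/m^3], v: tunnel speed [m/s], area: frontal area [m^2]."""
    return 2.0 * force_N / (rho * v ** 2 * area_m2)

# illustrative forces showing the effect of an aerodynamic deflector
print("with deflector:    Cd = %.2f" % drag_coefficient(36.4))
print("without deflector: Cd = %.2f" % drag_coefficient(46.3))
```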

Relevance: 100.00%

Abstract:

PURPOSE

The decision-making process plays a key role in organizations. Every decision-making process produces a final choice that may or may not prompt action. Decision makers recurrently face the dichotomous question of whether to follow a traditional sequential decision-making process, where the output of one decision is used as the input of the next stage, or a joint decision-making approach, where several decisions are taken simultaneously. The implications of the decision-making process affect different players in the organization, and the choice of decision-making approach remains difficult even with the current literature and practitioners' knowledge. The pursuit of better ways of making decisions has been a common goal for academics and practitioners. Management scientists use different techniques and approaches to improve different types of decisions, the purpose being to use the available resources (data and techniques) as well as possible to achieve the objectives of the organization. Developing and applying models and concepts can help solve the managerial problems faced every day in different companies. As a result of this research, different decision models are presented to contribute to the body of knowledge of management science. The first models focus on the manufacturing industry, and the second part of the models on the health care industry. Although these models are case specific, they serve to exemplify that different approaches to the problems could provide interesting results. Unfortunately, there is no universal recipe that can be applied to all problems; furthermore, the same model may deliver good results with certain data and bad results with other data. A framework to analyse the data before selecting the model to be used is presented and tested in the models developed to exemplify the ideas.

METHODOLOGY

As the first step of the research, a systematic literature review on joint decision-making is presented, together with the different opinions and suggestions of various scholars. For the next stage of the thesis, the decision-making processes of more than 50 companies from different sectors were analysed in the production planning area at the job-shop level. The data were obtained using surveys and face-to-face interviews. The following part of the research into the decision-making process was conducted in two application fields that are highly relevant for our society: manufacturing and health care. The first step was to study the interactions and develop a mathematical model for the replenishment of the car assembly line, where the vehicle routing problem and the inventory problem were combined. The next step was to add the scheduling of car production (car sequencing) and to use metaheuristics, such as ant colony optimisation and genetic algorithms, to test whether the behaviour holds for problems of different sizes. A similar approach is presented for the production of semiconductors and aviation parts, where a hoist has to move from one station to another to deal with the work and a job schedule has to be produced; for this problem, however, simulation was used for experimentation. In parallel, the scheduling of operating rooms was studied: surgeries were allocated to surgeons and the scheduling of operating rooms was analysed. The first part of this research was done in a teaching hospital, and in the second part the interaction of uncertainty was added. Once the previous problem had been analysed, a general framework to characterize the instance was built. A general conclusion is presented in the final chapter.

FINDINGS AND PRACTICAL IMPLICATIONS

The first part of the contributions is an update of the decision-making literature review, together with an analysis of the possible savings resulting from a change in the decision process. The survey results are then presented, revealing a lack of consistency between what managers believe and the actual degree of integration of their decisions. The next stage of the thesis contributes to the body of knowledge of operations research with the joint solution of the replenishment, sequencing and inventory problem in the assembly line, together with parallel work on operating room scheduling, where different solution approaches are presented. In addition to the contribution of the solving methods, using different techniques, the main contribution is the framework proposed to pre-evaluate the problem before deciding on the techniques to solve it. However, there is no straightforward answer as to whether joint or sequential solutions are better. Following the proposed framework, the evaluation of factors such as the flexibility of the answer, the number of actors, and the tightness of the data gives important hints as to the most suitable direction to take in tackling the problem.

RESEARCH LIMITATIONS AND AVENUES FOR FUTURE RESEARCH

In the first part of the work it proved very complicated to calculate the possible savings of different projects, since many papers do not report these quantities or base the impact on non-quantifiable benefits. Another issue is the confidentiality of many projects, whose data cannot be presented. For the car assembly line problem, more computational power would allow us to solve bigger instances. For the operating room problem, there was a lack of historical data with which to perform a parallel analysis in the teaching hospital. In order to keep testing the decision framework, more case studies need to be carried out so as to generalize the results and make them more evident and less ambiguous. The health care field offers great opportunities: despite the recent awareness of the need to improve the decision-making process, there is still much room for improvement. Another big difference from the automotive industry is that recent improvements are not spread among all the actors. Therefore, future research will focus more on the collaboration between academia and the health care sector.
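As a flavour of the metaheuristics mentioned above, here is a toy genetic algorithm for a car-sequencing instance that penalises placing two option-equipped cars in consecutive slots. The instance, operators and GA settings are illustrative, not those of the thesis.

```python
# Toy GA for car sequencing: at most 1 option-equipped car per 2 consecutive slots.
import random

cars = [1, 1, 1, 0, 0, 0, 0, 0]               # 1 = car requires the option

def violations(seq):
    return sum(seq[i] + seq[i + 1] > 1 for i in range(len(seq) - 1))

def crossover(a, b):
    # order crossover: keep a prefix of a, fill the rest in b's order
    cut = random.randrange(1, len(a))
    rest = list(b)
    for x in a[:cut]:
        rest.remove(x)                         # works with duplicate values
    return a[:cut] + rest

pop = [random.sample(cars, len(cars)) for _ in range(30)]
for _ in range(200):
    pop.sort(key=violations)
    elite = pop[:10]                           # keep the 10 best sequences
    pop = elite + [crossover(random.choice(elite), random.choice(elite))
                   for _ in range(20)]
pop.sort(key=violations)
print(pop[0], "violations:", violations(pop[0]))
```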