883 results for Mixed integer models


Relevance: 30.00%

Publisher:

Abstract:

In this paper we study a model for HIV and TB coinfection. We consider the integer order and the fractional order versions of the model. Let α∈[0.78,1.0] be the order of the fractional derivative; the integer order model is obtained for α=1.0. The model includes vertical transmission for HIV and treatment for both diseases. We compute the reproduction number of the integer order model and of the HIV and TB submodels, and analyze the stability of the disease-free equilibrium. We sketch the bifurcation diagrams of the integer order model for variation of the average number of sexual partners per person and per unit time, and of the tuberculosis transmission rate. We analyze numerical results of the fractional order model for different values of α, including α=1. The results show distinct types of transients for variation of α. Moreover, we speculate, from observation of the numerical results, that the order of the fractional derivative may behave as a bifurcation parameter for the model. We conclude that the dynamics of the integer and the fractional order versions of the model are very rich and that together these versions may provide a better understanding of the dynamics of HIV and TB coinfection.
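The abstract does not reproduce the model's equations, but the role of the order α can be illustrated with a minimal Grünwald-Letnikov scheme for a scalar fractional equation. The logistic right-hand side below is a hypothetical stand-in for the epidemic dynamics; setting α = 1 recovers ordinary forward-Euler integration of the integer-order case.

```python
import numpy as np

def gl_weights(alpha, n):
    # Grünwald-Letnikov binomial weights c_k = (-1)^k * C(alpha, k),
    # computed with the standard recurrence.
    c = np.empty(n + 1)
    c[0] = 1.0
    for k in range(1, n + 1):
        c[k] = c[k - 1] * (1.0 - (alpha + 1.0) / k)
    return c

def solve_fractional(f, y0, alpha, h, steps):
    # Explicit GL scheme for D^alpha y = f(y); for alpha = 1 the weights
    # reduce to (1, -1, 0, ...) and the update is forward Euler.
    c = gl_weights(alpha, steps)
    y = np.empty(steps + 1)
    y[0] = y0
    for n in range(1, steps + 1):
        memory = np.dot(c[1:n + 1], y[n - 1::-1])   # weighted history term
        y[n] = h**alpha * f(y[n - 1]) - memory
    return y
```

For a fractional logistic equation, trajectories with α < 1 approach the same equilibrium as the integer-order case but with algebraically slow transients, which is the kind of α-dependent behaviour the abstract describes.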

Relevance: 30.00%

Publisher:

Abstract:

This report describes the full research proposal for the project "Balancing and lot-sizing mixed-model lines in the footwear industry", to be developed as part of the master program in Engenharia Electrotécnica e de Computadores - Sistemas de Planeamento Industrial of the Instituto Superior de Engenharia do Porto. The Portuguese footwear industry is undergoing a period of great development and innovation. The numbers speak for themselves: Portuguese footwear exported 71 million pairs of shoes to over 130 countries in 2012. It is a diverse sector, covering different categories of women's, men's and children's shoes, each of them with various models. New and technologically advanced mixed-model assembly lines are being designed and installed to replace traditional mass assembly lines. There is an obvious need to manage them conveniently and to improve their operations. This work focuses on balancing and lot-sizing stitching mixed-model lines in a real-world environment. For that purpose it will be fundamental to develop and evaluate effective solution methods. Different objectives relevant to the companies may be considered, such as minimizing the number of workstations and minimizing the makespan, while taking into account many practical restrictions. The solution approaches will be based on approximate methods, namely metaheuristics. To show the impact of having different lots in production, the initial maximum amount for each lot is changed and a Tabu Search based procedure is used to improve the solutions. The developed approaches will be evaluated and tested, with special attention given to the solution of real applied problems. Future work may include the study of other neighbourhood structures related to Tabu Search and the development of ways to speed up the evaluation of neighbours, as well as improving the balancing solution method.
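A toy version of a Tabu Search for line balancing can convey the idea. The task times, number of stations and neighbourhood below are invented for illustration (the thesis's actual instances include precedence and other practical restrictions not modelled here): the search repeatedly moves one task between stations to reduce the makespan, with a tabu list blocking recently reversed moves.

```python
def tabu_balance(times, n_stations, iters=200, tenure=8):
    # Minimise the makespan (max station load) of an assignment of tasks
    # to stations; neighbourhood = move one task to another station.
    def makespan(a):
        loads = [0.0] * n_stations
        for task, st in enumerate(a):
            loads[st] += times[task]
        return max(loads)

    assign = [i % n_stations for i in range(len(times))]
    best, best_cost = assign[:], makespan(assign)
    tabu = {}                      # (task, station) -> iteration until which it is tabu
    for it in range(iters):
        candidates = []
        for task in range(len(times)):
            for st in range(n_stations):
                if st == assign[task]:
                    continue
                trial = assign[:]
                trial[task] = st
                cost = makespan(trial)
                # aspiration criterion: tabu moves are allowed if they
                # improve on the best solution found so far
                if tabu.get((task, st), -1) >= it and cost >= best_cost:
                    continue
                candidates.append((cost, task, st))
        if not candidates:
            break
        cost, task, st = min(candidates)          # best admissible move
        tabu[(task, assign[task])] = it + tenure  # forbid moving the task back
        assign[task] = st
        if cost < best_cost:
            best, best_cost = assign[:], cost
    return best, best_cost
```

For example, `tabu_balance([4.0, 3.0, 2.0, 2.0, 1.0], 2)` reaches the optimal two-station makespan of 6.0 (e.g. loads 4+2 and 3+2+1).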

Relevance: 30.00%

Publisher:

Abstract:

Dissertation presented to obtain the degree of Doctor in Chemical and Biochemical Engineering

Relevance: 30.00%

Publisher:

Abstract:

Both culture coverage and digital journalism are contemporary phenomena that have undergone several transformations within a short period of time. Whenever the media enters a period of uncertainty such as the present one, there is an attempt to innovate in order to seek sustainability, skip the crisis or find a new public. This indicates that there are new trends to be understood and explored, i.e., how are media innovating in a digital environment? Not only does the professional debate about the future of journalism justify the need to explore the issue, but so do the academic approaches to cultural journalism. However, none of the studies so far have considered innovation as a motto or driver and tried to explain how the media are covering culture, achieving sustainability and engaging with the readers in a digital environment. This research examines how European media which specialize in culture or have an important cultural section are innovating in a digital environment. Specifically, we see how these innovation strategies are being pursued in relation to the approach to culture and dominant cultural areas, editorial models, the use of digital tools for telling stories, overall brand positioning and extensions, engagement with the public, and business models. We conducted a mixed methods study combining case studies of four media projects, which integrate qualitative web feature and content analysis with quantitative web content analysis. Two major general-interest journalistic brands which started as physical newspapers – The Guardian (London, UK) and Público (Lisbon, Portugal) – a magazine specialized in international affairs, culture and design – Monocle (London, UK) – and a native digital media project that was launched by a cultural organization – Notodo, by La Fábrica – were the four case studies chosen.
Findings suggest, on one hand, that we are witnessing a paradigm shift in culture coverage in a digital environment, challenging traditional boundaries related to cultural themes and scope, angles, genres, content format and delivery, engagement and business models. Innovation in the four case studies lies especially along the product dimensions (format and content), brand positioning and process (business model and ways to engage with users). On the other hand, there are still perennial values that are crucial to innovation and sustainability, such as commitment to journalism, consistency (to the reader, to brand extensions and to the advertiser), intelligent differentiation and the capability of knowing what innovation means and how it can be applied, since this thesis also confirms that one formula doesn't suit all. Changing mindsets, overcoming cultural inertia and optimizing the memory of the websites, looking at them as living, organic bodies which continuously interact with the readers in many different ways, and not as a closed collection of articles, are still the main challenges for some media.

Relevance: 30.00%

Publisher:

Abstract:

BACKGROUND: Even though a large proportion of physiotherapists worldwide work in the private sector, very little is known of the organizations within which they practice. Such knowledge is important to help understand contexts of practice and how they influence the quality of services and patient outcomes. The purpose of this study was to: 1) describe characteristics of organizations where physiotherapists practice in the private sector, and 2) explore the existence of a taxonomy of organizational models. METHODS: This was a cross-sectional quantitative survey of 236 randomly-selected physiotherapists. Participants completed a purpose-designed questionnaire online or by telephone, covering organizational vision, resources, structures and practices. Organizational characteristics were analyzed descriptively, while organizational models were identified by multiple correspondence analyses. RESULTS: Most organizations were for-profit (93.2%), located in urban areas (91.5%), and within buildings containing multiple businesses/organizations (76.7%). The majority included multiple providers (89.8%) from diverse professions, mainly physiotherapy assistants (68.7%), massage therapists (67.3%) and osteopaths (50.2%). Four organizational models were identified: 1) solo practice, 2) middle-scale multiprovider, 3) large-scale multiprovider and 4) mixed. CONCLUSIONS: The results of this study provide a detailed description of the organizations where physiotherapists practice, and highlight the importance of human resources in differentiating organizational models. Further research examining the influence of these organizational characteristics and models on outcomes such as physiotherapists' professional practices and patient outcomes is needed.

Relevance: 30.00%

Publisher:

Abstract:

Female crickets respond selectively to variations in species-specific male calling songs. This selectivity has been shown to be age-dependent; older females are less choosy. However, female quality should also affect female selectivity. The effect of female quality on mate choice was examined in Gryllus integer by comparing the phonotactic responses of females on different diets and with different parasite loads to various synthetic models of conspecific calling song. Test females were virgin, 11-14 days old, and had been maintained on one of five diets varying in protein and fat content. Phonotaxis was quantified using a non-compensating Kugel treadmill, which generates vector scores incorporating the speed and direction of movement of each female. Test females were presented with four calling song models which differed in pulse rate, but were still within the natural range of the species for the experimental temperature. After testing, females were dissected and the number of gregarine parasites within the digestive tract counted. There were no significant effects of either diet or parasitism on female motivation to mate, although the combination of these variables appeared to have an effect, with no apparent trend. Control females did not discriminate among song types, but there was a trend of female preference for lower pulse rates, which are closest to the mean pulse rate for the species. Heavily parasitized females did not discriminate among pulse rates, although there was a similar trend of high vector scores for low pulse rates. Diet, however, affected selectivity, with poorly-fed females showing significantly high vector scores for pulse rates near the species mean. Such findings raise interesting questions about energy allocation and the costs and risks of phonotaxis and mate choice in acoustic Orthoptera. These results are discussed in terms of sexual selection and female mate choice.

Relevance: 30.00%

Publisher:

Abstract:

Observing a model perform a motor skill promotes learning of that skill. However, few researchers have studied the characteristics of a good model or identified the observation conditions that can optimize learning. In the three studies composing this thesis, we examined the effects of the model's skill level, the model's handedness, the viewpoint from which the observer watches, and the presentation medium on the learning of a sequential timing task composed of four segments. In the first experiment of the first study, participants observed either a novice, an expert, or both a novice and an expert. Retention and transfer tests revealed that observing a novice was less beneficial for learning than observing an expert or a combination of the two (mixed condition). Moreover, the combined observation of novice and expert models appeared to induce a more stable movement and better generalization of the imposed relative timing than the other two conditions. In the second experiment, we sought to determine whether a particular type of novice performance (highly variable, with or without improvement) in the mixed observation condition led to better learning of the task. No significant difference was observed among the different types of novice models used in the mixed condition. These results suggest that mixed observation provides an accurate representation of what to do (the expert model) and that learning is further enhanced when the learner can contrast this with the performance of less successful models. In our second study, right-handed participants observed a model from a first-person or third-person viewpoint.
Observing a model who uses the same preferred hand as oneself led to better learning of the task than observing a model of opposite handedness, regardless of the viewing angle. This result suggests that the action observation network (AON) is more sensitive to the model's handedness than to the observer's viewpoint. The action observation network thus appears to be linked to sensorimotor regions of the brain that simulate motor programming as if the observed movement were performed by one's own dominant hand. Finally, in the third study, we sought to determine whether the presentation medium (live or video) influences observational learning and whether this effect is modulated by the observer's viewpoint (first or third person). Participants observed either a live model or a video presentation of the model, from either a first-person or a third-person viewpoint. Our results revealed that learning by observation did not differ significantly with the type of presentation or the observer's viewpoint. These results run counter to predictions derived from brain-imaging studies, which have shown greater activation of the sensorimotor cortex during live observation compared with video observation, and during first-person compared with third-person viewing. Overall, our results indicate that the model's skill level and handedness are important determinants of observational learning, whereas the observer's viewpoint and the presentation medium have no significant effect on the learning of a motor task.
Moreover, our results suggest that the greater activation of the action observation network revealed by imaging studies during action observation does not necessarily lead to better learning of the task.

Relevance: 30.00%

Publisher:

Abstract:

This paper studies oligopolistic competition in education markets when schools can be private and public and when the quality of education depends on "peer group" effects. In the first stage of our game schools set their quality and in the second stage they fix their tuition fees. We examine how the (subgame perfect Nash) equilibrium allocation (qualities, tuition fees and welfare) is affected by the presence of public schools and by their relative position in the quality range. When there are no peer group effects, efficiency is achieved when (at least) all but one school are public. In particular, in the two-school case, the impact of a public school is spectacular, as we go from a setting of extreme differentiation to an efficient allocation. However, in the three-school case, a single public school will lower welfare compared to the private equilibrium. We then introduce a peer group effect which, for any given school, is determined by its student with the highest ability. These PGE do have a significant impact on the results. The mixed equilibrium is now never efficient. However, welfare continues to be improved if all but one school are public. Overall, the presence of PGE reduces the effectiveness of public schools as a regulatory tool in an otherwise private education sector.

Relevance: 30.00%

Publisher:

Abstract:

Survival studies are concerned with the time elapsed from the start of the study (diagnosis of the disease, start of treatment, ...) until the event of interest occurs (death, cure, improvement, ...). However, this event is often observed more than once in the same individual during the follow-up period (multivariate survival data). In this case, a methodology different from standard survival analysis must be used. The main problem posed by this type of data is that the observations may not be independent. Until now, this problem has been addressed in two different ways depending on the dependent variable. If this variable follows a distribution from the exponential family, generalized linear mixed models (GLMMs) are used; if the variable is time, whose probability distribution does not belong to this family, multivariate survival analysis is used. The aim of this thesis is to unify these two approaches, that is, to use time as the dependent variable, with groupings of individuals or of observations, within a GLMM, in order to introduce new methods for handling this type of data.
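A minimal simulated sketch can illustrate the random-effects idea behind GLMMs for clustered event times (all numbers are invented, and censoring, which real survival data would require, is ignored): log event times share a per-subject random intercept (a "frailty"), and within-subject centring removes that intercept, recovering the fixed covariate effect.

```python
import numpy as np

rng = np.random.default_rng(1)
n_subj, n_obs = 40, 5
subj = np.repeat(np.arange(n_subj), n_obs)             # subject index per observation
frailty = np.repeat(rng.normal(0.0, 0.5, n_subj), n_obs)  # shared random intercept
x = rng.normal(size=subj.size)                          # a covariate
log_t = 1.0 + 0.8 * x + frailty + rng.normal(0.0, 0.3, subj.size)

def centre(v, groups, k):
    # Subtract each group's mean: this removes the shared random intercept.
    means = np.bincount(groups, weights=v, minlength=k) / np.bincount(groups, minlength=k)
    return v - means[groups]

xc = centre(x, subj, n_subj)
yc = centre(log_t, subj, n_subj)
beta = float(xc @ yc / (xc @ xc))   # recovers the fixed effect (true value 0.8)
```

A full GLMM would estimate the random-intercept variance jointly by (restricted) maximum likelihood; the centring trick is only meant to show why accounting for the grouping matters.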

Relevance: 30.00%

Publisher:

Abstract:

Most failures of structural elements are due to fatigue loading. Consequently, mechanical fatigue is a key factor in the design of mechanical components. In laminated composite materials, the fatigue failure process involves different damage mechanisms that result in the degradation of the material. One of the most important damage mechanisms is delamination between layers of the laminate. In aeronautical components, composite panels are exposed to impacts, and delaminations readily appear in a laminate after an impact. Many composite components have curved shapes, ply drop-offs and plies with different orientations, which cause a delamination to propagate under a mixed mode that depends on the delamination size. That is, delaminations generally propagate under varying mode mixity. It is therefore important to develop new methods to characterize the subcritical mixed-mode fatigue growth of delaminations. The main objective of this work is the characterization of the growth of delaminations in laminated composites under varying mode mixity due to fatigue loading. To this end, a new model for mixed-mode fatigue delamination growth is proposed. In contrast to existing models, the proposed model is formulated according to the non-monotonic variation of the propagation parameters with mode mixity observed in various experimental results. In addition, an analysis of the mixed-mode end load split (MMELS) test is carried out, whose most important characteristic is that the mode mixity varies as the delamination grows. For this analysis, two theoretical methods from the literature are considered. However, the resulting expressions for the MMELS test are not equivalent, and the differences between the two methods can be important, up to a factor of 50.
For this reason, a more accurate alternative analysis of the MMELS test is carried out in this work in order to establish a comparison. This alternative analysis is based on the finite element method and the virtual crack closure technique (VCCT). Important aspects to consider for the proper characterization of materials using the MMELS test emerge from this analysis. During the study, a rig for the MMELS test was designed and built. Different essentially unidirectional carbon/epoxy laminate specimens are used for the experimental characterization of fatigue delamination growth under varying mode mixity. A fractographic analysis of some of the delamination fracture surfaces is also carried out. The experimental results are compared with the predictions of the proposed model for the fatigue propagation of interlaminar cracks.
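The general family of models discussed above can be sketched with a Paris-type growth law whose parameters depend on the mode mixity φ = G_II/(G_I + G_II). All constants, the linear interpolation of the parameters, and the toy compliance relation below are illustrative assumptions, not the thesis's model (which allows a non-monotonic parameter variation with φ); the point is only that the growth law itself changes as the delamination grows, as in the MMELS test.

```python
def paris_rate(G_max, mode_mix, C_I=1e-7, C_II=5e-7, m_I=6.0, m_II=9.0):
    # Illustrative mixed-mode Paris-type law: da/dN = C(phi) * G_max^m(phi),
    # with C and m linearly blended between pure mode I (phi=0) and
    # pure mode II (phi=1). Hypothetical constants and units.
    C = C_I + (C_II - C_I) * mode_mix
    m = m_I + (m_II - m_I) * mode_mix
    return C * G_max ** m

def grow(a0, cycles, load, mode_mix_of_a):
    # Integrate crack length over load cycles; the mode mixity (and hence
    # the growth parameters) is updated with the delamination length a.
    a = a0
    for _ in range(cycles):
        G = load * a                       # toy compliance relation: G grows with a
        a += paris_rate(G, mode_mix_of_a(a))
    return a
```

For instance, `grow(1.0, 1000, 2.0, lambda a: min(1.0, a / 10.0))` propagates a crack whose mode mixity drifts toward mode II as it lengthens.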

Relevance: 30.00%

Publisher:

Abstract:

Using the Met Office large-eddy model (LEM) we simulate a mixed-phase altocumulus cloud that was observed from Chilbolton in southern England by a 94 GHz Doppler radar, a 905 nm lidar, a dual-wavelength microwave radiometer and also by four radiosondes. It is important to test and evaluate such simulations with observations, since there are significant differences between results from different cloud-resolving models for ice clouds. Simulating the Doppler radar and lidar data within the LEM allows us to compare observed and modelled quantities directly, and allows us to explore the relationships between observed and unobserved variables. For general-circulation models, which currently tend to give poor representations of mixed-phase clouds, the case shows the importance of using: (i) separate prognostic ice and liquid water, (ii) a vertical resolution that captures the thin layers of liquid water, and (iii) an accurate representation of the subgrid vertical velocities that allow liquid water to form. It is shown that large-scale ascents and descents are significant for this case, and so the horizontally averaged LEM profiles are relaxed towards observed profiles to account for these. The LEM simulation then gives a reasonable cloud, with an ice-water path approximately two thirds of that observed, with liquid water at the cloud top, as observed. However, the liquid-water cells that form in the updraughts at cloud top in the LEM have liquid-water paths (LWPs) up to half those observed, and there are too few cells, giving a mean LWP five to ten times smaller than observed. In reality, ice nucleation and fallout may deplete ice-nuclei concentrations at the cloud top, allowing more liquid water to form there, but this process is not represented in the model. Decreasing the heterogeneous nucleation rate in the LEM increased the LWP, which supports this hypothesis.
The LEM captures the increase in the standard deviation in Doppler velocities (and so vertical winds) with height, but values are 1.5 to 4 times smaller than observed (although values are larger in an unforced model run, this only increases the modelled LWP by a factor of approximately two). The LEM data show that, for values larger than approximately 12 cm s(-1), the standard deviation in Doppler velocities provides an almost unbiased estimate of the standard deviation in vertical winds, but provides an overestimate for smaller values. Time-smoothing the observed Doppler velocities and modelled mass-squared-weighted fallspeeds shows that observed fallspeeds are approximately two-thirds of the modelled values. Decreasing the modelled fallspeeds to those observed increases the modelled IWC, giving an IWP 1.6 times that observed.

Relevance: 30.00%

Publisher:

Abstract:

The uptake and storage of anthropogenic carbon in the North Atlantic is investigated using different configurations of ocean general circulation/carbon cycle models. We investigate how different representations of the ocean physics in the models, which represent the range of models currently in use, affect the evolution of CO2 uptake in the North Atlantic. The buffer effect of the ocean carbon system would be expected to reduce ocean CO2 uptake as the ocean absorbs increasing amounts of CO2. We find that the strength of the buffer effect is very dependent on the model ocean state, as it affects both the magnitude and timing of the changes in uptake. The timescale over which uptake of CO2 in the North Atlantic drops to below preindustrial levels is particularly sensitive to the ocean state which sets the degree of buffering; it is less sensitive to the choice of atmospheric CO2 forcing scenario. Neglecting physical climate change effects, North Atlantic CO2 uptake drops below preindustrial levels between 50 and 300 years after stabilisation of atmospheric CO2 in different model configurations. Storage of anthropogenic carbon in the North Atlantic varies much less among the different model configurations, as differences in ocean transport of dissolved inorganic carbon and uptake of CO2 compensate each other. This supports the idea that measured inventories of anthropogenic carbon in the real ocean cannot be used to constrain the surface uptake. Including physical climate change effects reduces anthropogenic CO2 uptake and storage in the North Atlantic further, due to the combined effects of surface warming, increased freshwater input, and a slowdown of the meridional overturning circulation. The timescale over which North Atlantic CO2 uptake drops to below preindustrial levels is reduced by about one-third, leading to an estimate of this timescale for the real world of about 50 years after the stabilisation of atmospheric CO2. 
In the climate change experiment, a shallowing of the mixed layer depths in the North Atlantic results in a significant reduction in primary production, reducing the potential role for biology in drawing down anthropogenic CO2.
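The buffer effect described above can be caricatured with a one-box surface ocean (all parameter values are hypothetical and chosen only for illustration): the ocean's pCO2 is assumed to rise as a power of its carbon content, with the Revelle factor R as the exponent, so a larger R lets ocean pCO2 catch up with the atmosphere after less carbon has been absorbed, choking off uptake sooner.

```python
def cumulative_uptake(revelle, pco2_atm=560.0, years=200, k=0.05,
                      dic0=2000.0, pco2_0=280.0):
    # One-box surface ocean with pCO2_ocean ~ pCO2_0 * (DIC/DIC0)^R.
    # Larger Revelle factor R => ocean pCO2 rises faster per unit of
    # absorbed carbon, so cumulative uptake under a fixed (stabilised)
    # atmospheric pCO2 is smaller -- the buffer effect.
    dic = dic0
    total = 0.0
    for _ in range(years):
        pco2_ocean = pco2_0 * (dic / dic0) ** revelle
        flux = k * (pco2_atm - pco2_ocean)   # toy linear gas-exchange law
        dic += flux
        total += flux
    return total
```

Running this with R = 10 versus R = 14 shows the stronger-buffered ocean taking up markedly less carbon for the same atmospheric forcing, which is the state-dependence the abstract highlights.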

Relevance: 30.00%

Publisher:

Abstract:

In this paper, the mixed logit (ML) using Bayesian methods was employed to examine willingness-to-pay (WTP) to consume bread produced with reduced levels of pesticides so as to ameliorate environmental quality, from data generated by a choice experiment. Model comparison used the marginal likelihood, which is preferable for Bayesian model comparison and testing. Models containing constant and random parameters for a number of distributions were considered, along with models in 'preference space' and 'WTP space' as well as those allowing for misreporting. We found: strong support for the ML estimated in WTP space; little support for fixing the price coefficient, a common practice advocated and adopted in the environmental economics literature; and weak evidence for misreporting.
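The preference-space / WTP-space distinction rests on the ratio WTP = -β_attribute/β_price. A minimal simulated mixed logit (all coefficient values are invented for illustration, not the paper's estimates) makes the mechanics concrete: the taste for pesticide reduction is random across respondents while the price coefficient is held fixed, and each respondent's implied WTP follows from the ratio.

```python
import numpy as np

def choice_probs(V):
    # Multinomial logit choice probabilities from utilities
    # (last axis indexes the alternatives).
    e = np.exp(V - V.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(0)
n = 2000
beta_price = -0.4                        # fixed price coefficient (illustrative)
beta_reduc = rng.normal(1.2, 0.3, n)     # random taste for pesticide reduction

# Person-level willingness to pay for one unit of pesticide reduction:
wtp = -beta_reduc / beta_price           # population mean ~ 1.2 / 0.4 = 3.0

# Probability of choosing the reduced-pesticide loaf at a price premium of 2,
# versus an ordinary loaf normalised to zero utility:
V = np.stack([beta_reduc * 1.0 + beta_price * 2.0, np.zeros(n)], axis=-1)
p_choose = choice_probs(V)[:, 0].mean()
```

Estimating the model "in WTP space" instead reparameterises utility directly in terms of `wtp`, so that a distribution is placed on WTP itself rather than derived from a ratio of coefficients.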

Relevance: 30.00%

Publisher:

Abstract:

Three simple climate models (SCMs) are calibrated using simulations from atmosphere-ocean general circulation models (AOGCMs). In addition to using two conventional SCMs, results from a third, simpler model developed specifically for this study are obtained. An easy-to-implement and comprehensive iterative procedure is applied that optimises the SCM emulation of global-mean surface temperature and total ocean heat content, and, if available in the SCM, of surface temperature over land, over the ocean and in both hemispheres, and of the global-mean ocean temperature profile. The method gives best-fit estimates as well as uncertainty intervals for the different SCM parameters. For the calibration, AOGCM simulations with two different types of forcing scenarios are used: pulse forcing simulations performed with two AOGCMs and gradually changing forcing simulations from 15 AOGCMs obtained within the framework of the Fourth Assessment Report of the Intergovernmental Panel on Climate Change. The method is found to work well. For all possible combinations of SCMs and AOGCMs the emulation of AOGCM results could be improved. The obtained SCM parameters depend both on the AOGCM data and the type of forcing scenario. SCMs with a poor representation of the atmosphere's thermal inertia are better able to emulate AOGCM results from gradually changing forcing than from pulse forcing simulations. Correct simultaneous emulation of both atmospheric temperatures and the ocean temperature profile by the SCMs strongly depends on the representation of the temperature gradient between the atmosphere and the mixed layer. Introducing climate sensitivities that are dependent on the forcing mechanism in the SCMs allows the emulation of AOGCM responses to carbon dioxide and solar insolation forcings equally well.
Also, some SCM parameters are found to be very insensitive to the fitting, and the reduction of their uncertainty through the fitting procedure is only marginal, while other parameters change considerably. The very simple SCM is found to reproduce the AOGCM results as well as the other two, comparatively more sophisticated, SCMs.
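The flavour of such a calibration can be conveyed with a two-box energy-balance SCM fitted to a synthetic "AOGCM" temperature series. Everything below is an illustrative stand-in: the parameter values are invented, the target series is generated by the same model, and a brute-force grid search over the feedback parameter replaces the study's iterative multi-variable optimisation.

```python
import numpy as np

def two_box_temperature(forcing, lam, gamma, c_mix=8.0, c_deep=100.0, dt=1.0):
    # Two-box energy-balance SCM: a mixed layer (fast, heat capacity c_mix)
    # coupled to a deep ocean (slow, c_deep). lam is the climate feedback
    # parameter, gamma the mixed-layer/deep-ocean heat exchange coefficient.
    T, Td = 0.0, 0.0
    out = []
    for F in forcing:
        dT = (F - lam * T - gamma * (T - Td)) / c_mix * dt
        dTd = gamma * (T - Td) / c_deep * dt
        T, Td = T + dT, Td + dTd
        out.append(T)
    return np.array(out)

# Synthetic target: abrupt CO2-doubling forcing (~3.7 W m^-2) run with a
# "true" feedback of 1.2; then recover lam by least squares over a grid.
forcing = np.full(200, 3.7)
target = two_box_temperature(forcing, lam=1.2, gamma=0.7)
grid = np.linspace(0.5, 2.5, 201)
errs = [np.sum((two_box_temperature(forcing, l, 0.7) - target) ** 2) for l in grid]
best_lam = grid[int(np.argmin(errs))]
```

In the study's setting the same idea is applied jointly to several temperature and heat-content diagnostics, which is what makes some parameters well constrained by the fit and others nearly insensitive to it.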