917 results for MODEL SEARCH
Abstract:
Automated examination timetabling has been addressed by a wide variety of methodologies and techniques over the last ten years or so. Many of the methods in this broad range of approaches have been evaluated on a collection of benchmark instances provided by the University of Toronto in 1996. While these datasets have been an invaluable resource for research into examination timetabling, the instances have significant limitations in terms of their relevance to real-world examination timetabling in modern universities. This paper presents a detailed model which draws upon experience of implementing examination timetabling systems in universities in Europe, Australasia and America. This model represents the problem presented in the 2nd International Timetabling Competition (ITC2007). In presenting this detailed new model, the paper describes the examination timetabling track introduced as part of the competition. The datasets used in the competition are likewise based on current real-world instances introduced by EventMAP Limited. It is hoped that the interest generated by the competition will lead to the development, investigation and application of a host of novel and exciting techniques to address this important real-world search domain. Moreover, the motivating goal of this paper is to close the gap that currently exists between theory and practice in examination timetabling by presenting the research community with a rigorous model which represents the complexity of the real-world situation. We describe the model and its motivations, followed by a full formal definition.
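As an illustration of the kind of hard constraint such a model formalizes, the sketch below counts student clashes (a student assigned two exams in the same period). The data layout is a simplified assumption for illustration, not the ITC2007 file format.

```python
from collections import defaultdict

def count_clashes(assignment, enrolments):
    """Count student clashes: a clash occurs whenever a student has two
    exams assigned to the same period, a typical hard constraint in
    examination timetabling models."""
    clashes = 0
    for student, exams in enrolments.items():
        per_period = defaultdict(int)
        for exam in exams:
            per_period[assignment[exam]] += 1
        # each pair of same-period exams for one student is one clash
        clashes += sum(n * (n - 1) // 2 for n in per_period.values())
    return clashes

assignment = {"MATH": 0, "PHYS": 0, "CHEM": 1}          # exam -> period
enrolments = {"s1": ["MATH", "PHYS"], "s2": ["MATH", "CHEM"]}
print(count_clashes(assignment, enrolments))  # -> 1 (s1 sits both exams in period 0)
```

A solver for the full model would combine several such hard constraints with weighted soft-constraint penalties into a single objective.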
Abstract:
Images of the site of the Type Ic supernova (SN) 2002ap taken before explosion were analysed previously by Smartt et al. We have uncovered new unpublished, archival pre-explosion images from the Canada-France-Hawaii Telescope (CFHT) that are vastly superior in depth and image quality. In this paper we present a further search for the progenitor star of this unusual Type Ic SN. Aligning high-resolution Hubble Space Telescope observations of the SN itself with the archival CFHT images allowed us to pinpoint the location of the progenitor site on the ground-based observations. We find that a source visible in the B- and R-band pre-explosion images close to the position of the SN is (1) not coincident with the SN position within the uncertainties of our relative astrometry and (2) still visible ~4.7 yr post-explosion in late-time observations taken with the William Herschel Telescope. We therefore conclude that it is not the progenitor of SN 2002ap. We derived absolute limiting magnitudes for the progenitor of M_B ≥ -4.2 ± 0.5 and M_R ≥ -5.1 ± 0.5. These are the deepest limits yet placed on a Type Ic SN progenitor. We rule out all massive stars with initial masses greater than 7-8 M_⊙ (the lower mass limit for stars to undergo core collapse) that have not evolved to become Wolf-Rayet stars. This is consistent with the prediction that Type Ic SNe should result from the explosions of Wolf-Rayet stars. Comparing our luminosity limits with stellar models of single stars at appropriate metallicity (Z = 0.008) and with standard mass-loss rates, we find no model that produces a Wolf-Rayet star of low enough mass and luminosity to be classed as a viable progenitor. Models with twice the standard mass-loss rates provide possible single-star progenitors, but all are initially more massive than 30-40 M_⊙.
We conclude that any single-star progenitor must have experienced at least twice the standard mass-loss rates, been initially more massive than 30-40 M_⊙ and exploded as a Wolf-Rayet star of final mass 10-12 M_⊙. Alternatively, a progenitor star of lower initial mass may have evolved in an interacting binary system. Mazzali et al. propose such a binary scenario for the progenitor of SN 2002ap, in which a star of initial mass 15-20 M_⊙ is stripped by its binary companion, becoming a 5 M_⊙ Wolf-Rayet star prior to explosion. We constrain any possible binary companion to a main-sequence star of
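The absolute limits quoted above follow from the standard distance-modulus relation M = m - 5 log10(d / 10 pc) - A. A minimal sketch; the apparent limit and extinction below are illustrative assumptions, not values from the paper:

```python
import math

def absolute_mag_limit(m_limit, distance_mpc, extinction=0.0):
    """Convert an apparent limiting magnitude to an absolute one via the
    distance modulus mu = 5 * log10(d / 10 pc), minus foreground extinction A."""
    mu = 5 * math.log10(distance_mpc * 1e6 / 10.0)
    return m_limit - mu - extinction

# Hypothetical inputs: a 24.5-mag apparent limit at ~9.3 Mpc (roughly the
# distance of M74, the host galaxy of SN 2002ap)
print(round(absolute_mag_limit(24.5, 9.3), 1))  # -> -5.3
```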
Abstract:
Pan-resistant Acinetobacter baumannii has prompted the search for therapeutic alternatives. We evaluated the efficacy of four cecropin A-melittin hybrid peptides (CA-M) in vivo. Toxicity was determined in mouse erythrocytes and in mice (lethal dose parameters LD0, LD50 and LD100). The 50% protective dose (PD50) was determined by inoculating groups of ten mice with the minimal lethal dose of A. baumannii (BMLD) and treating with doses of each CA-M from 0.5 mg/kg up to LD0. The activity of CA-Ms against A. baumannii was assessed in a peritoneal sepsis model. Mice were sacrificed at 0 and at 1, 3, 5 and 7 h post-treatment. Spleen and peritoneal fluid bacterial concentrations were measured. CA(1-8)M(1-18) was the least haemolytic on mouse erythrocytes. LD0 (mg/kg) was 32 for CA(1-8)M(1-18), CA(1-7)M(2-9) and Oct-CA(1-7)M(2-9), and 16 for CA(1-7)M(5-9). PD50 was not achieved with non-toxic doses (≤ LD0). In the sepsis model, all CA-Ms were bacteriostatic in spleen and decreased bacterial concentration (p < 0.05) in peritoneal fluid at 1 h post-treatment; at later times, bacterial regrowth was observed in peritoneal fluid. CA-Ms showed local short-term efficacy in the peritoneal sepsis model caused by pan-resistant A. baumannii.
Abstract:
There has been a long-standing discussion in the literature as to whether core accretion or disk instability is the dominant mode of planet formation. Over the last decade, several lines of evidence have been presented showing that core accretion is most likely the dominant mechanism for the close-in population of planets probed by radial velocity and transits. However, this does not by itself prove that core accretion is the dominant mode for the total planet population, since disk instability might conceivably produce and retain large numbers of planets in the far-out regions of the disk. If this is a relevant scenario, then the outer massive disks of B-stars should be among the best places for massive planets and brown dwarfs to form and reside. In this study, we present high-contrast imaging of 18 nearby massive stars of which 15 are in the B2-A0 spectral-type range and provide excellent sensitivity to wide companions. By comparing our sensitivities to model predictions of disk instability based on physical criteria for fragmentation and cooling, and using Monte Carlo simulations for orbital distributions, we find that ~85% of such companions should have been detected in our images on average. Given this high degree of completeness, stringent statistical limits can be set from the null-detection result, even with the limited sample size. We find that
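The ~85% completeness figure above is the kind of number such a Monte Carlo produces. A toy sketch follows; every distribution and sensitivity limit in it is an invented assumption, not a parameter from the study:

```python
import math
import random

def detection_completeness(n_trials, sep_limits, contrast_limit, mass_to_contrast, seed=0):
    """Monte Carlo completeness: draw companions from assumed orbital and
    mass distributions, project them onto the sky, and count the fraction
    that land inside the instrument's sensitivity window."""
    rng = random.Random(seed)
    detected = 0
    for _ in range(n_trials):
        a = 10 ** rng.uniform(1, 3)            # semi-major axis, 10-1000 au, log-uniform
        phase = rng.uniform(0, 2 * math.pi)    # random phase on a circular orbit
        proj_sep = a * abs(math.sin(phase))    # crude projected separation
        mass = rng.uniform(5, 80)              # companion mass, Jupiter masses
        bright_enough = mass_to_contrast(mass) <= contrast_limit
        in_field = sep_limits[0] <= proj_sep <= sep_limits[1]
        if bright_enough and in_field:
            detected += 1
    return detected / n_trials

frac = detection_completeness(
    n_trials=100_000,
    sep_limits=(20, 800),                      # au, hypothetical sensitive range
    contrast_limit=15.0,                       # mag, hypothetical contrast floor
    mass_to_contrast=lambda m: 20 - 0.1 * m,   # toy mass-to-contrast relation
)
print(0.0 <= frac <= 1.0)
```

With a high completeness fraction, a null detection translates directly into a statistical upper limit on the companion frequency.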
Abstract:
For the actual existence of e-government it is necessary and crucial to provide public information and documentation, making access simple for citizens. A portion, not necessarily small, of these documents is unstructured and in natural language, and consequently beyond what current search systems are generally able to handle effectively. The thesis, then, is that it is possible to improve access to these contents using systems that process natural language and create structured information, particularly if supported by semantics. To put this thesis to the test, this work was developed in three major phases: (1) design of a conceptual model integrating the creation of structured information and making it available to various actors, in line with the vision of e-government 2.0; (2) definition and development of a prototype instantiating the key modules of this conceptual model, including ontology-based information extraction supported by examples of relevant information, knowledge management, and access based on natural language; (3) assessment of the usability and acceptability of querying information as made possible by the prototype (and in consequence of the conceptual model) by users in a realistic scenario, which included comparison with existing forms of access. In addition to this evaluation, at another level related to technology assessment rather than to the model, the performance of the subsystem responsible for information extraction was evaluated. The evaluation results show that the proposed model was perceived as more effective and useful than the alternatives. Together with the prototype's performance at extracting information from documents, comparable to the state of the art, the results demonstrate the feasibility and advantages, with current technology, of using natural language processing and integrating semantic information to improve access to unstructured content in natural language.
The conceptual model and the prototype demonstrator are intended to contribute to the future existence of more sophisticated search systems that are also more suitable for e-government. For transparency in governance, active citizenship and greater agility in interacting with the public administration, among other goals, citizens and businesses need quick and easy access to official information, even if it was originally created in natural language.
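As a toy illustration of turning unstructured official text into structured, queryable facts, the sketch below uses regex patterns as a crude stand-in for the ontology-guided extraction described above; the relation name, pattern and document are all invented:

```python
import re

def extract_triples(text, patterns):
    """Toy information-extraction pass: each named pattern yields
    (subject, relation, object) triples that a query system could index."""
    triples = []
    for relation, pattern in patterns.items():
        for m in re.finditer(pattern, text):
            triples.append((m.group("subj"), relation, m.group("obj")))
    return triples

patterns = {
    # hypothetical relation for official documents
    "issued_by": r"(?P<subj>Decree \d+) issued by (?P<obj>[\w][\w ]+?)(?=[.,])",
}
doc = "Decree 42 issued by the Ministry of Finance, published in 2014."
print(extract_triples(doc, patterns))
# -> [('Decree 42', 'issued_by', 'the Ministry of Finance')]
```

A real system would replace the hand-written patterns with extraction rules learned from annotated examples and grounded in an ontology.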
Abstract:
Doctoral thesis, Informatics (Informatics Engineering), Universidade de Lisboa, Faculdade de Ciências, 2014
Abstract:
This report describes the full research proposal for the project "Balancing and lot-sizing mixed-model lines in the footwear industry", to be developed as part of the master program in Engenharia Electrotécnica e de Computadores - Sistemas de Planeamento Industrial of the Instituto Superior de Engenharia do Porto. The Portuguese footwear industry is undergoing a period of great development and innovation. The numbers speak for themselves: Portuguese footwear exported 71 million pairs of shoes to over 130 countries in 2012. It is a diverse sector covering different categories of women's, men's and children's shoes, each with various models. New and technologically advanced mixed-model assembly lines are being designed and installed to replace traditional mass assembly lines, and there is an obvious need to manage them conveniently and to improve their operations. This work focuses on balancing and lot-sizing stitching mixed-model lines in a real-world environment. For that purpose it will be fundamental to develop and evaluate adequate, effective solution methods. Different objectives relevant to the companies may be considered, such as minimizing the number of workstations and minimizing the makespan, while taking many practical constraints into account. The solution approaches will be based on approximate methods, namely metaheuristics. To show the impact of having different lots in production, the initial maximum size of each lot is changed and a Tabu Search based procedure is used to improve the solutions. The developed approaches will be evaluated and tested, with special attention given to the solution of real applied problems. Future work may include the study of other neighbourhood structures for Tabu Search and the development of ways to speed up the evaluation of neighbours, as well as improving the balancing solution method.
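A minimal sketch of the Tabu Search idea mentioned above, on a toy line-balancing instance: one task moves between workstations per iteration, the reverse move is temporarily forbidden, and the objective is the maximum workstation load (a makespan-like measure). Precedence constraints and lot-sizing from the real problem are omitted, and the task data is invented.

```python
from collections import deque

def tabu_search(task_times, n_stations, n_iters=200, tabu_len=10):
    """Move one task per iteration to the best non-tabu station, forbidding
    the reverse move for tabu_len iterations; track the best solution seen."""
    assign = {t: i % n_stations for i, t in enumerate(task_times)}  # round-robin start
    def load(a):
        loads = [0.0] * n_stations
        for task, station in a.items():
            loads[station] += task_times[task]
        return max(loads)
    best_cost = load(assign)
    best = dict(assign)
    tabu = deque(maxlen=tabu_len)
    for _ in range(n_iters):
        moves = [(t, s) for t in task_times for s in range(n_stations)
                 if s != assign[t] and (t, s) not in tabu]
        if not moves:
            break
        t, s = min(moves, key=lambda m: load({**assign, m[0]: m[1]}))  # best neighbour
        tabu.append((t, assign[t]))  # forbid moving the task straight back
        assign[t] = s
        if load(assign) < best_cost:
            best_cost = load(assign)
            best = dict(assign)
    return best, best_cost

tasks = {"cut": 4, "stitch": 6, "glue": 3, "finish": 5}  # task -> processing time
_, cost = tabu_search(tasks, n_stations=2)
print(cost)  # -> 9, the optimum here: {cut, finish} vs {stitch, glue}
```

Note that the search accepts the best neighbour even when it is worse than the current solution; the tabu list is what prevents it from cycling straight back.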
Abstract:
This paper studies the effects of monetary policy on mutual fund risk taking using a sample of Portuguese fixed-income mutual funds in the 2000-2012 period. First, I estimate time-varying measures of risk exposure (betas) for the individual funds, for the benchmark portfolio, and for a representative equally-weighted portfolio, through 24-month rolling regressions of a two-factor model with two systematic risk factors: interest rate risk (TERM) and default risk (DEF). In the second phase, using the estimated betas, I examine what portion of the risk exposure is in excess of the benchmark (active risk) and how it relates to monetary policy proxies (one-month rate, Taylor residual, real rate, and the first principal component of a cross-section of government yields and rates). Using this methodology, I provide empirical evidence that Portuguese fixed-income mutual funds respond to accommodative monetary policy by significantly increasing exposure, in excess of their benchmarks, to default risk, and slightly to interest rate risk as well. I also find that the increase in funds' risk exposure to gain a boost in return (search for yield) is more pronounced following the 2007-2009 global financial crisis, indicating that the current historically low interest rates may incentivize excessive risk taking. My results suggest that monetary policy affects the risk appetite of non-bank financial intermediaries.
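The rolling-beta estimation can be sketched as follows. For brevity this uses a single factor and the cov/var closed form rather than the two-factor regression of the paper, and the return series are synthetic:

```python
def rolling_beta(fund_returns, factor_returns, window=24):
    """Rolling single-factor beta: cov(fund, factor) / var(factor)
    over each consecutive window of observations."""
    betas = []
    for i in range(len(fund_returns) - window + 1):
        y = fund_returns[i:i + window]
        x = factor_returns[i:i + window]
        mx = sum(x) / window
        my = sum(y) / window
        cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / window
        var = sum((a - mx) ** 2 for a in x) / window
        betas.append(cov / var)
    return betas

# Sanity check: a fund that is exactly 1.5x the factor has beta 1.5 in every window
factor = [0.01 * ((i % 5) - 2) for i in range(36)]
fund = [1.5 * r for r in factor]
print(all(abs(b - 1.5) < 1e-9 for b in rolling_beta(fund, factor)))  # -> True
```

The two-factor version would regress each window's fund returns on both TERM and DEF jointly; active risk is then the fund beta minus the benchmark beta estimated the same way.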
Abstract:
Despite the rapid change in today's business environment there are relatively few studies about corporate renewal. This study aims, for its part, at filling that research gap by studying the concepts of strategy, corporate renewal, innovation and corporate venturing. Its purpose is to enhance our understanding of how established companies operating in a dynamic and global environment can benefit from their corporate venturing activities. The theoretical part approaches the research problem at corporate and venture levels. Firstly, it focuses on mapping the determinants of strategy and suggests using industry, location, resources, knowledge, structure and culture, market, technology and business model to assess the environment and using these determinants to optimize the speed and magnitude of change. Secondly, it concludes that the choice of innovation strategy depends on the type and dimensions of innovation, and suggests assessing market, technology and business model, as well as the novelty and complexity related to each of them, in order to choose an optimal context for developing innovations further. Thirdly, it directs attention to the processes through which corporate renewal takes place. On the corporate level these processes are identified as strategy formulation, strategy formation and strategy implementation. On the venture level the renewal processes are identified as learning, leveraging and nesting. The theoretical contribution of this study, the framework of strategic corporate venturing, joins corporate- and venture-level management issues together and concludes that strategy processes and linking processes are the mechanism through which continuous corporate renewal takes place. The framework of strategic corporate venturing proposed by this study is a new way to illustrate the role of corporate venturing as a purposefully built, different view of a company's business environment.
The empirical part extended the framework by enhancing our understanding of the link between corporate renewal and corporate venturing in its real-life environment in three Finnish companies: Metso, Nokia and TeliaSonera. Characterizing the companies' environment with the determinants of strategy identified in this study provided a structured way to analyze their competitive position and the renewal challenges they are facing. More importantly, the case studies confirmed that a link between corporate renewal and corporate venturing exists, and found that the link is not as straightforward as indicated by the theory. Furthermore, the case studies enhanced the framework by indicating a sequence according to which the processes work. Firstly, the induced strategy processes, strategy formulation and strategy implementation, set the scene for the corporate venturing context and management processes and leave strategy formation to the venture. Only after that can strategies formed by ventures come back to the corporate level and, if found viable at the corporate level, be formalized through formulation and implementation. With the help of the framework of strategic corporate venturing the link between corporate renewal and corporate venturing can be found and managed. The suggested response to the continuous need for change is continuous renewal, i.e. institutionalizing corporate renewal in the strategy processes of the company. As far as benefiting from venturing is concerned, the answer lies in deliberately managing venturing in a context different from the mainstream businesses and establishing efficient linking processes to exploit the renewal potential of individual ventures.
Abstract:
Responding to a series of articles in the sport management literature calling for more diversity in terms of areas of interest or methods, this study warns against the danger of excessively fragmenting this field of research. The works of Kuhn (1962) and Pfeffer (1993) are taken as the basis of an argument that connects convergence with scientific strength. However, being aware of the large number of counterarguments directed at this line of reasoning, a new model of convergence is proposed, which focuses on clusters of research contributions with similar areas of interest, methods and concepts. The existence of these clusters is determined with the help of a bibliometric analysis of publications in three sport management journals. This examination determines that there are justified reasons for concern about the level of convergence in the field, pointing to a reduced ability to create large clusters of contributions in similar areas of interest.
Abstract:
The search for new ways of correcting idiopathic scoliosis has a long history. Conventional treatment of idiopathic scoliosis consists of brace wearing or surgical correction of the deformity. Since their introduction, both methods have proven their effectiveness. However, despite evident positive characteristics, these methods can cause a significant number of adverse effects on the patient's health. Fusionless techniques for the treatment of scoliosis appear to be a promising alternative to traditional treatment, since they carry fewer surgical risks and complications than conventional methods while preserving the mobility of the intervertebral disc. However, the use of these techniques requires deep knowledge of vertebral growth modulation. The main objective of the present study is to estimate the potential of shape-memory alloy (SMA) staples to modulate the growth of porcine vertebrae by measuring bone growth on the growth plates of instrumented vertebrae in comparison with a control group. The method is based on the Hueter-Volkmann law. We chose NiTi SMA staples for our study and Landrace pigs as the experimental animal. The staples were inserted at five thoracic levels, from T6 to T11. In addition, radiographs were taken every 2 weeks. The presence of shape-memory alloy staples produced significant scoliotic curves in 4 of 6 instrumented animals and a considerable slowing of bone growth (up to 35.4%) compared with the control and sham groups. The study demonstrated in vivo the potential of shape-memory alloy staples to modulate vertebral growth by creating scoliotic curves on radiographs and by slowing the growth rate on the instrumented growth plates. 
The precise position of the staple is essential for the modulation of bone growth and the development of experimental scoliosis.
Abstract:
Using the most recent data collected by the ATLAS detector in pp collisions at 7 and 8 TeV at the LHC, this thesis establishes stringent constraints on a multitude of models beyond the Standard Model (SM) of particle physics. In particular, two types of hypothetical particles, which exist in various theoretical models but are absent from the SM, are studied and probed. The first type studied is vector-like quarks (VLQs), produced in pp collisions through electroweak couplings with the light quarks u and d. These VLQs are searched for in their decays to a W or Z boson and a light quark. Theoretical arguments establish that, under certain reasonable conditions, single production would dominate pair production of VLQs. The particular topology of single-production VLQ events then allows efficient optimization techniques to be deployed for extracting them from the electroweak backgrounds. The second type of particle searched for is those decaying to WZ, where the gauge bosons W and Z themselves decay leptonically. The final states detected by ATLAS are therefore events with three leptons and missing transverse energy. The invariant mass distribution of these objects is then examined to determine the presence or absence of new resonances, which would manifest as a localized excess. Although at first sight these two new types of particles have little in common, both are in fact closely linked to electroweak symmetry breaking. In several theoretical models, the hypothetical existence of VLQs is proposed to cancel the top-quark contributions to the radiative corrections of the SM Higgs mass. 
In parallel, other models predict WZ resonances while suggesting that the Higgs is a composite particle, thereby upending the entire Higgs sector of the SM. The two analyses presented in this thesis thus have a fundamental link with the very nature of the Higgs, broadening our knowledge of the origin of the intrinsic mass of particles. In the end, neither analysis observed a significant excess in its respective signal regions, which makes it possible to set limits on the production cross-section as a function of the mass of the resonances.
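The quantity scanned for resonances is the invariant mass of the reconstructed system. A minimal sketch of the computation from (E, px, py, pz) four-vectors, with illustrative values only:

```python
import math

def invariant_mass(particles):
    """Invariant mass of a system of particles given (E, px, py, pz)
    four-vectors in GeV: m = sqrt(E^2 - |p|^2) of the summed four-vector."""
    E = sum(p[0] for p in particles)
    px = sum(p[1] for p in particles)
    py = sum(p[2] for p in particles)
    pz = sum(p[3] for p in particles)
    return math.sqrt(max(E * E - px * px - py * py - pz * pz, 0.0))

# Two back-to-back 50 GeV massless leptons reconstruct to 100 GeV
print(invariant_mass([(50, 0, 0, 50), (50, 0, 0, -50)]))  # -> 100.0
```

In a real trilepton analysis the neutrino's longitudinal momentum is unmeasured, so a transverse-mass variant of this formula is typically used instead.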
Abstract:
This thesis aims to improve automation in Model-Driven Engineering (MDE). MDE is a paradigm that promises to reduce software complexity through the intensive use of models and automatic model transformations (MT). Put simply, in the MDE vision, specialists use several models to represent a piece of software and produce the source code by automatically transforming these models. Automation is therefore a key factor and a founding principle of MDE. Besides MT, other activities need automation, e.g. the definition of modelling languages and software migration. In this context, the main contribution of this thesis is to propose a general approach for improving MDE automation. Our approach is based on example-guided meta-heuristic search. We apply this approach to two important MDE problems: (1) model transformation and (2) the precise definition of modelling languages. For the first problem, we distinguish between transformation in the context of migration and general model-to-model transformations. In the migration case, we propose a software clustering method based on a meta-heuristic guided by clustering examples. Similarly, for general transformations, we learn model transformations using a genetic programming algorithm that draws on examples of past transformations. For the precise definition of modelling languages, we propose a method based on meta-heuristic search that derives well-formedness rules for meta-models, with the objective of discriminating well between valid and invalid models. 
The empirical studies we conducted show that the proposed approaches obtain good results, both quantitative and qualitative. These allow us to conclude that improving MDE automation using meta-heuristic search methods and examples can contribute to a broader adoption of MDE in industry in the years to come.
Abstract:
Emma Hamilton (1765-1815) had a considerable impact at a pivotal moment in European history and art. Showing enormous resilience, she found an effective way to assert her agency and was a powerful source of inspiration for generations of women and artists in their own quest for self-expression and self-fulfilment. This thesis demonstrates that Emma drew her particular power from her ability to negotiate different and sometimes even contradictory identities: object and subject; model and sitter; artist, muse and work of art; wife, mistress and prostitute; commoner and aristocrat; socialite and ambassadress; and performer of a myriad of historical, biblical, literary and mythological characters, both male and female. Wife of the English ambassador to Naples, favourite of the Queen of Naples and lover of Admiral Horatio Nelson, she was an agent on the political stage during the revolutionary and Napoleonic era. In her dizzying social ascent, which took her from the most abject poverty to the highest ranks of the English aristocracy, she knew how to adapt, adjust and reinvent herself. She received and entertained countless writers, artists, scientists, nobles, diplomats and royals. She participated in the development and dissemination of Neoclassicism at the very moment of its efflorescence. She created her Attitudes, a performance answering her era's taste for classicism, which was admired and imitated across Europe and inspired generations of female performers. She learned to dance the tarantella and introduced it into aristocratic salons. She influenced a network of women stretching from Paris to Saint Petersburg and including Élisabeth Vigée-Le Brun, Germaine de Staël and Juliette Récamier. A model without equal, she inspired several artists to produce works they recognized as being among their best. 
She was depicted by the greatest artists of her time, including Angelica Kauffman, Benjamin West, Élisabeth Vigée-Le Brun, George Romney, James Gillray, Joseph Nollekens, Joshua Reynolds, Thomas Lawrence and Thomas Rowlandson. She repeatedly pushed against social limits and mores. Nevertheless, Emma did not try to present a coherent, unified, polished identity. On the contrary, she was a kaleidoscope of multiple selves that she kept active and in dialogue with one another, continually rearranging her facets so that she could simultaneously express herself fully and present to others what they wanted to see.
Abstract:
Queueing systems in which arriving customers who find all servers and waiting positions (if any) occupied may retry for service after a period of time are called retrial queues, or queues with repeated attempts. This study has two objectives. The first is to introduce orbital search into retrial queueing models, which makes it possible to minimize the idle time of the server; if holding costs and the cost of searching for customers are introduced, the results obtained can be used for the optimal tuning of the parameters of the search mechanism. The second is to provide insight into the link between the corresponding retrial queue and the classical queue. We observe that when the search probability Pj = 1 for all j, the model reduces to the classical queue, and when Pj = 0 for all j, the model becomes the retrial queue. The study discusses the performance evaluation of a single-server retrial queue with a Poisson arrival process. It then discusses the structure of the busy period and its analysis in terms of Laplace transforms, and provides a direct method of evaluation for the first and second moments of the busy period. It goes on to discuss the M/PH/1 retrial queue with disasters to the unit in service and orbital search, and a multi-server retrial queueing model (MAP/M/c) with search of customers from the orbit; the MAP is a convenient tool for modelling both renewal and non-renewal arrivals. Finally, the present model deals with back-and-forth movement between the classical queue and the retrial queue: in this model, when the orbit size increases, the retrial rate correspondingly increases, thereby reducing the idle time of the server between services.
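A small discrete-event sketch of the orbital-search idea described above: at each service completion the server fetches a customer directly from the orbit with probability p_search, so p_search = 1 approaches classical-queue behaviour and p_search = 0 the pure retrial queue. All rates are illustrative assumptions.

```python
import random

def mean_orbit_size(lam, mu, theta, p_search, t_end=20_000.0, seed=7):
    """Simulate an M/M/1 retrial queue with orbital search and return the
    time-average orbit size. lam: arrival rate; mu: service rate; theta:
    per-customer retrial rate; p_search: probability the server searches
    the orbit at a service completion."""
    rng = random.Random(seed)
    exp = rng.expovariate
    t, orbit, area = 0.0, 0, 0.0
    busy = False
    service_end = float("inf")
    next_arrival = exp(lam)
    def next_retrial_time(now):
        return now + exp(orbit * theta) if orbit else float("inf")
    next_retrial = float("inf")
    while t < t_end:
        t2 = min(next_arrival, next_retrial, service_end)
        area += orbit * (t2 - t)               # accumulate orbit-size integral
        t = t2
        if t == service_end:                   # service completion
            busy = False
            service_end = float("inf")
            if orbit and rng.random() < p_search:
                orbit -= 1                     # orbital search grabs a customer
                busy = True
                service_end = t + exp(mu)
            next_retrial = next_retrial_time(t)
        elif t == next_arrival:                # primary arrival
            next_arrival = t + exp(lam)
            if busy:
                orbit += 1                     # server busy: join the orbit
                next_retrial = next_retrial_time(t)
            else:
                busy = True
                service_end = t + exp(mu)
        else:                                  # retrial attempt from the orbit
            if not busy:
                orbit -= 1
                busy = True
                service_end = t + exp(mu)
            next_retrial = next_retrial_time(t)
    return area / t

no_search = mean_orbit_size(0.6, 1.0, theta=0.3, p_search=0.0)
full_search = mean_orbit_size(0.6, 1.0, theta=0.3, p_search=1.0)
print(full_search < no_search)  # search empties the orbit much faster
```

With these parameters the p_search = 1 case behaves like a classical M/M/1 waiting line, while p_search = 0 lets customers linger in orbit between slow retrial attempts, which is the contrast the limiting cases Pj = 1 and Pj = 0 in the text describe.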