Abstract:
The aim of this study is to determine the current state, good practices and operational problems of local small and medium-sized software companies supplying the forest industry. By identifying the current state of these companies, it becomes possible to plan concrete development measures targeted at them. The work concentrates on small and medium-sized software companies in South-East Finland that supply the forest industry. As a frame of reference, the work presents the current state of the forest industry, its development trends and the impact of those trends on the information system needs of forest industry companies; the frame of reference also introduces the qualitative research methods used in the study. The research is qualitative in nature and descriptive in approach. The research material consisted of 19 expert interviews and documentary material; in analysing it, I used a data-driven method and a case study approach. Based on the results, it was possible to describe characteristics common to the 10 local small and medium-sized software companies that took part in the study. In addition, three distinct types of software company could be identified, and the current state, good practices and operational problems of each type could be characterised. The identified types of local small and medium-sized software companies are: the specialist, the anticipator and the collector. The results provide a good starting point for identifying future development trends and for planning measures to develop operations.
Abstract:
While equal political representation of all citizens is a fundamental democratic goal, it is hampered empirically in a multitude of ways. This study examines how the societal level of economic inequality affects the representation of relatively poor citizens by parties and governments. Using CSES survey data for citizens' policy preferences and expert placements of political parties, empirical evidence is found that in economically more unequal societies, the party system represents the preferences of relatively poor citizens worse than in more equal societies. This moderating effect of economic equality is also found for policy congruence between citizens and governments, albeit slightly less clear-cut.
Abstract:
Purpose The purpose of our multidisciplinary study was to define a pragmatic and secure alternative to the creation of a national centralised medical record, one that could gather together the different parts of a patient's medical record scattered across the different hospitals where the patient was hospitalised, without any risk of breaching confidentiality. Methods We first analyse the reasons for the failure and the dangers of centralisation (i.e. the difficulty of defining a European patient identifier, of reaching a common standard for the contents of the medical record, and of ensuring data protection) and then propose an alternative that uses the existing available data, on the basis that setting up a safe though imperfect system could be better than continuing the quest for a mythical perfect information system that has still not been found after a search lasting two decades. Results We describe the functioning of Medical Record Search Engines (MRSEs), which use pseudonymisation of patients' identities. The MRSE will be able to retrieve and provide, upon an MD's request, all the available information concerning a patient who has been hospitalised in different hospitals, without ever having access to the patient's identity. The drawback of this system is that the medical practitioner then has to read all of the information, create his or her own synthesis and possibly discard extraneous data. Conclusions Faced with the difficulties and the risks of setting up a centralised medical record system, a system that gathers all of the available information concerning a patient could be of great interest. This low-cost pragmatic alternative, which could be developed quickly, should be taken into consideration by health authorities.
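As an aside on mechanics: the pseudonymisation the MRSE relies on can be pictured with a minimal Python sketch using a keyed hash, so that the same identity always maps to the same pseudonym while the identity itself stays unrecoverable without the key. The key handling and names below are illustrative assumptions, not the scheme actually specified in the paper.

import hmac
import hashlib

def pseudonymise(patient_identity: str, secret_key: bytes) -> str:
    """Derive a stable pseudonym from a patient identity.

    A keyed hash (HMAC-SHA256) yields the same pseudonym for the same
    identity on every request, so records held in different hospitals
    can be linked, while the identity itself cannot be recovered
    without the secret key.
    """
    return hmac.new(secret_key, patient_identity.encode("utf-8"),
                    hashlib.sha256).hexdigest()

# Hypothetical usage: the key would be held by a trusted component,
# never by the search engine itself.
key = b"key-held-by-trusted-authority"
print(pseudonymise("DOE John 1970-01-01", key))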
Abstract:
BACKGROUND: People with neurological disease have a much higher risk of both faecal incontinence and constipation than the general population. There is often a fine dividing line between the two conditions, with any management intended to ameliorate one risking precipitating the other. Bowel problems are observed to be the cause of much anxiety and may reduce quality of life in these people. Current bowel management is largely empirical, with a limited research base. OBJECTIVES: To determine the effects of management strategies for faecal incontinence and constipation in people with neurological diseases affecting the central nervous system. SEARCH STRATEGY: We searched the Cochrane Incontinence Group Trials Register, the Cochrane Controlled Trials Register, MEDLINE, EMBASE and all reference lists of relevant articles. Date of the most recent searches: May 2000. SELECTION CRITERIA: All randomised or quasi-randomised trials evaluating any type of conservative or surgical measure for the management of faecal incontinence and constipation in people with neurological diseases were selected. Specific therapies for the treatment of neurological diseases that indirectly affect bowel dysfunction were also considered. DATA COLLECTION AND ANALYSIS: All three reviewers assessed the methodological quality of eligible trials, and two reviewers independently extracted data from included trials using a range of pre-specified outcome measures. MAIN RESULTS: Only seven trials were identified by the search strategy, and all were small and of poor quality. Oral medications for constipation were the subject of four trials. Cisapride does not seem to have clinically useful effects in people with spinal cord injuries (two trials). Psyllium was associated with increased stool frequency in people with Parkinson's disease, but not with altered colonic transit time (one trial). Some rectal preparations to initiate defecation produced faster results than others (one trial). Different time schedules for administration of rectal medication may produce different bowel responses (one trial). Mechanical evacuation may be more effective than oral or rectal medication (one trial). The clinical significance of any of these results is difficult to interpret. REVIEWERS' CONCLUSIONS: It is not possible to draw any recommendation for bowel care in people with neurological diseases from the trials included in this review. Bowel management for these people must remain empirical until well-designed controlled trials with adequate numbers and clinically relevant outcome measures become available.
Abstract:
The number of patients treated by haemodialysis (HD) is continuously increasing. Complications associated with vascular accesses are the leading cause of hospitalisation in these patients. Since 2001, nephrologists, surgeons, angiologists and radiologists at the CHUV have been working to develop a multidisciplinary model that includes the planning and monitoring of HD accesses. In this setting, echo-Doppler ultrasound is an important investigative tool. Every patient is discussed, and decisions are taken, at a weekly multidisciplinary meeting. A network has been created with nephrologists of peripheral centres and other specialists. This model makes it possible to centralise investigational information and coordinate patient care while keeping, and even developing, some investigational activities and treatment in peripheral centres.
Abstract:
Biological scaling analyses employing the widely used bivariate allometric model are beset by at least four interacting problems: (1) choice of an appropriate best-fit line with due attention to the influence of outliers; (2) objective recognition of divergent subsets in the data (allometric grades); (3) potential restrictions on statistical independence resulting from phylogenetic inertia; and (4) the need for extreme caution in inferring causation from correlation. A new non-parametric line-fitting technique has been developed that eliminates requirements for normality of distribution, greatly reduces the influence of outliers and permits objective recognition of grade shifts in substantial datasets. This technique is applied in scaling analyses of mammalian gestation periods and of neonatal body mass in primates. These analyses feed into a re-examination, conducted with partial correlation analysis, of the maternal energy hypothesis relating to mammalian brain evolution, which suggests links between body size and brain size in neonates and adults, gestation period and basal metabolic rate. Much has been made of the potential problem of phylogenetic inertia as a confounding factor in scaling analyses. However, this problem may be less severe than previously suspected, because nested analyses of variance conducted on residual variation (rather than on raw values) reveal that there is considerable variance at low taxonomic levels. In fact, limited divergence in body size between closely related species is one of the prime examples of phylogenetic inertia. One common approach to eliminating perceived problems of phylogenetic inertia in allometric analyses has been calculation of 'independent contrast values'. It is demonstrated that the reasoning behind this approach is flawed in several ways. Calculation of contrast values for closely related species of similar body size is, in fact, highly questionable, particularly when there are major deviations from the best-fit line for the scaling relationship under scrutiny.
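The abstract does not spell out the new non-parametric line-fitting technique, so the following Python sketch uses the classical Theil-Sen estimator as a stand-in to show what an outlier-resistant, distribution-free line fit looks like; the allometric data are invented for illustration.

import numpy as np

def theil_sen(x: np.ndarray, y: np.ndarray) -> tuple[float, float]:
    """Theil-Sen estimator: slope = median of all pairwise slopes.

    Being median-based, it needs no normality assumption and is far
    less sensitive to outliers than ordinary least squares.
    """
    n = len(x)
    slopes = [(y[j] - y[i]) / (x[j] - x[i])
              for i in range(n) for j in range(i + 1, n)
              if x[j] != x[i]]
    slope = float(np.median(slopes))
    intercept = float(np.median(y - slope * x))
    return slope, intercept

# Invented allometric data: log body mass vs. log gestation period.
rng = np.random.default_rng(0)
log_mass = rng.uniform(0, 6, 50)
log_gestation = 0.25 * log_mass + 1.5 + rng.normal(0, 0.1, 50)
print(theil_sen(log_mass, log_gestation))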
Abstract:
In this paper, we consider a discrete-time risk process allowing for delay in claim settlement, which introduces a certain type of dependence in the process. From martingale theory, an expression for the ultimate ruin probability is obtained, and Lundberg-type inequalities are derived. The impact of delay in claim settlement is then investigated. To this end, a convex order comparison of the aggregate claim amounts is performed with the corresponding non-delayed risk model, and numerical simulations are carried out with Belgian market data.
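As a rough illustration of the model class (not the paper's derivation), here is a minimal Monte Carlo sketch in Python of a discrete-time risk process in which each claim may be settled one period late; the Poisson/exponential claim assumptions and all parameters are illustrative, not the Belgian market data used in the paper.

import numpy as np

rng = np.random.default_rng(1)

def ruin_probability(u: float, premium: float, horizon: int,
                     n_paths: int = 10_000, p_delay: float = 0.3) -> float:
    """Monte Carlo estimate of the finite-horizon ruin probability for a
    discrete-time risk process in which each claim is settled either in
    the period it occurs or one period later (with probability p_delay).

    Illustrative sketch only: Poisson claim counts and exponential claim
    sizes stand in for the distributions left unspecified here.
    """
    ruined = 0
    for _ in range(n_paths):
        surplus, carried = u, 0.0      # 'carried' = claims delayed to next period
        for _ in range(horizon):
            n_claims = rng.poisson(0.8)
            sizes = rng.exponential(1.0, n_claims)
            delayed = rng.random(n_claims) < p_delay
            paid_now = sizes[~delayed].sum() + carried
            carried = sizes[delayed].sum()
            surplus += premium - paid_now
            if surplus < 0:
                ruined += 1
                break
    return ruined / n_paths

print(ruin_probability(u=5.0, premium=1.0, horizon=100))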
Abstract:
This bachelor's thesis examines introductory programming education, a central topic in basic computer science teaching, and the problems associated with it. The work looks into basic methods of and approaches to teaching programming, as well as solutions that can make the teaching more effective. These solutions include the choice of programming language, finding a suitable development environment, and seeking teaching aids to support the course. In addition, choosing the activities involved in running the course, such as exercises and possible weekly assignments, is part of this work. The thesis approaches the topic by examining the suitability of Python for introductory programming teaching, e.g. by comparing it with other common existing teaching languages such as C, C++ and Java. It examines the strengths and weaknesses of the language and investigates whether Python can naturally serve as the primary teaching language. The work also considers what the course should cover, how the course would be most effectively run, and what kind of technical framework it would be sensible to choose for implementing the course.
Abstract:
An alternative to the Pareto-dominance relation is proposed. The new relation is based on ranking a set of solutions according to each separate objective and on an aggregation function that calculates a scalar fitness value for each solution. The relation is called ranking-dominance, and it tries to tackle the curse of dimensionality commonly observed in evolutionary multi-objective optimization. Ranking-dominance can be used to sort a set of solutions even for a large number of objectives, when the Pareto-dominance relation can no longer distinguish solutions from one another. This permits search to advance even with a large number of objectives. It is also shown that ranking-dominance does not violate Pareto-dominance. Results indicate that selection based on ranking-dominance is able to advance search towards the Pareto front in some cases where selection based on Pareto-dominance stagnates. However, in some cases it is also possible that search does not proceed in the direction of the Pareto front, because the ranking-dominance relation permits deterioration of individual objectives. Results also show that when the number of objectives increases, selection based on just Pareto-dominance without diversity maintenance is able to advance search better than with diversity maintenance. Diversity maintenance therefore appears to aggravate the curse of dimensionality.
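A minimal Python sketch of the ranking-dominance idea as described above: solutions are ranked separately on each objective and the ranks are aggregated into a scalar fitness. Summation is assumed here as the aggregation function; other choices are possible.

import numpy as np

def ranking_fitness(objectives: np.ndarray) -> np.ndarray:
    """Scalar fitness via ranking: for each objective (column, minimised),
    rank the solutions, then aggregate the ranks per solution.

    Summation is used as the aggregation function in this sketch.
    """
    # argsort of argsort yields 0-based ranks per column
    ranks = objectives.argsort(axis=0).argsort(axis=0)
    return ranks.sum(axis=1)

def ranking_dominates(i: int, j: int, fitness: np.ndarray) -> bool:
    """Solution i ranking-dominates j if its aggregated rank is strictly better."""
    return fitness[i] < fitness[j]

# Three mutually non-dominated solutions on four objectives (all minimised):
# Pareto-dominance cannot separate them, but ranking still orders them.
F = np.array([[1.0, 4.0, 2.0, 3.0],
              [2.0, 1.0, 4.0, 2.0],
              [3.0, 2.0, 1.0, 4.0]])
fit = ranking_fitness(F)
print(fit, ranking_dominates(0, 2, fit))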
Abstract:
Abstract The main objective of this work is to show how the choice of the temporal dimension and of the spatial structure of the population influences an artificial evolutionary process. In the field of Artificial Evolution we can observe a common trend towards synchronously evolving panmictic populations, i.e., populations in which any individual can be recombined with any other individual. As early as the 1990s, the works of Spiessens and Manderick, Sarma and De Jong, and Gorges-Schleuter pointed out that, if a population is structured according to a mono- or bi-dimensional regular lattice, the evolutionary process shows a different dynamic with respect to the panmictic case. In particular, Sarma and De Jong studied the selection pressure (i.e., the diffusion of a best individual when only the selection operator is active) induced by a regular bi-dimensional structure of the population, proposing a logistic modeling of the selection pressure curves. This model supposes that the diffusion of a best individual in a population follows an exponential law. We show that such a model is inadequate to describe the process, since the growth speed must be quadratic or sub-quadratic in the case of a bi-dimensional regular lattice. New linear and sub-quadratic models are proposed for modeling the selection pressure curves in, respectively, mono- and bi-dimensional regular structures. These models are extended to describe the process when asynchronous evolutions are employed. Different dynamics of the populations imply different search strategies of the resulting algorithm when the evolutionary process is used to solve optimisation problems. A benchmark of both discrete and continuous test problems is used to study the search characteristics of the different topologies and updates of the populations. In the last decade, the pioneering studies of Watts and Strogatz have shown that most real networks, both in the biological and sociological worlds as well as in man-made structures, have mathematical properties that set them apart from regular and random structures. In particular, they introduced the concept of small-world graphs, and they showed that this new family of structures has interesting computing capabilities. Populations structured according to these new topologies are proposed, and their evolutionary dynamics are studied and modeled. We also propose asynchronous evolutions for these structures, and the resulting evolutionary behaviors are investigated. Many man-made networks have grown, and are still growing, incrementally, and explanations have been proposed for their actual shape, such as Albert and Barabasi's preferential attachment growth rule. However, many actual networks seem to have undergone some kind of Darwinian variation and selection. Thus, how these networks might have come to be selected is an interesting yet unanswered question. In the last part of this work, we show how a simple evolutionary algorithm can enable the emergence of these kinds of structures for two prototypical problems of the automata networks world, the majority classification and synchronisation problems.
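The selection-pressure experiment described above can be pictured with a short Python sketch: the diffusion of a single best individual on a bi-dimensional regular lattice when only selection acts. The synchronous best-neighbour update and von Neumann neighbourhood are assumptions for the illustration; the point is that the occupied region grows like a disc of linearly increasing radius, so the takeover curve is at most quadratic rather than exponential.

import numpy as np

def takeover_curve(n: int = 64, steps: int = 80) -> list[float]:
    """Selection-only diffusion of a single best individual on an n x n
    torus: each generation, every cell synchronously copies the fittest
    of itself and its four von Neumann neighbours.
    """
    fitness = np.zeros((n, n))
    fitness[n // 2, n // 2] = 1.0          # single best individual
    curve = []
    for _ in range(steps):
        neighbours = np.stack([fitness,
                               np.roll(fitness, 1, 0), np.roll(fitness, -1, 0),
                               np.roll(fitness, 1, 1), np.roll(fitness, -1, 1)])
        fitness = neighbours.max(axis=0)   # synchronous update
        curve.append(float(fitness.mean()))  # fraction of best copies
    return curve

print(takeover_curve()[:10])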
Abstract:
Abstract The solvability of the problem of fair exchange in a synchronous system subject to Byzantine failures is investigated in this work. The fair exchange problem arises when a group of processes are required to exchange digital items in a fair manner, which means that either each process obtains the item it was expecting or no process obtains any information on the inputs of others. After introducing a novel specification of fair exchange that clearly separates safety and liveness, we give an overview of the difficulty of solving such a problem in the context of a fully-connected topology. On one hand, we show that no solution to fair exchange exists in the absence of an identified process that every process can trust a priori; on the other hand, a well-known solution to fair exchange relying on a trusted third party is recalled. These two results lead us to complete our system model with a flexible representation of the notion of trust. We then show that fair exchange is solvable if and only if a connectivity condition, named the reachable majority condition, is satisfied. The necessity of the condition is proven by an impossibility result and its sufficiency by presenting a general solution to fair exchange relying on a set of trusted processes. The focus is then turned towards a specific network topology in order to provide a fully decentralized, yet realistic, solution to fair exchange. The general solution mentioned above is optimized by reducing the computational load assumed by trusted processes as far as possible. Accordingly, our fair exchange protocol relies on trusted tamperproof modules that have limited communication abilities and are only required in key steps of the algorithm. This modular solution is then implemented in the context of a pedagogical application developed for illustrating and apprehending the complexity of fair exchange. This application, which also includes the implementation of a wide range of Byzantine behaviors, allows executions of the algorithm to be set up and monitored through a graphical display. Surprisingly, some of our results on fair exchange seem contradictory with those found in the literature of secure multiparty computation, a problem from the field of modern cryptography, although the two problems have much in common. Both problems are closely related to the notion of trusted third party, but their approaches and descriptions differ greatly. By introducing a common specification framework, a comparison is proposed in order to clarify their differences and the possible origins of the confusion between them. This leads us to introduce the problem of generalized fair computation, a generalization of fair exchange. Finally, a solution to this new problem is given by generalizing our modular solution to fair exchange.
Abstract:
Despite the rapid change in today's business environment there are relatively few studies about corporate renewal. This study aims for its part at filling that research gap by studying the concepts of strategy, corporate renewal, innovation and corporate venturing. Its purpose is to enhance our understanding of how established companies operating in a dynamic and global environment can benefit from their corporate venturing activities. The theoretical part approaches the research problem at the corporate and venture levels. Firstly, it focuses on mapping the determinants of strategy and suggests using industry, location, resources, knowledge, structure and culture, market, technology and business model to assess the environment, and using these determinants to optimize the speed and magnitude of change. Secondly, it concludes that the choice of innovation strategy depends on the type and dimensions of innovation, and suggests assessing market, technology and business model, as well as the novelty and complexity related to each of them, in order to choose an optimal context for developing innovations further. Thirdly, it directs attention to the processes through which corporate renewal takes place. On the corporate level these processes are identified as strategy formulation, strategy formation and strategy implementation. On the venture level the renewal processes are identified as learning, leveraging and nesting. The theoretical contribution of this study, the framework of strategic corporate venturing, joins corporate and venture level management issues together and concludes that strategy processes and linking processes are the mechanism through which continuous corporate renewal takes place. The framework of strategic corporate venturing proposed by this study is a new way to illustrate the role of corporate venturing as a purposefully built, different view of a company's business environment. The empirical part extended the framework by enhancing our understanding of the link between corporate renewal and corporate venturing in its real-life environment in three Finnish companies: Metso, Nokia and TeliaSonera. Characterizing the companies' environment with the determinants of strategy identified in this study provided a structured way to analyze their competitive position and the renewal challenges that they are facing. More importantly, the case studies confirmed that a link between corporate renewal and corporate venturing exists, and found that the link is not as straightforward as indicated by the theory. Furthermore, the case studies enhanced the framework by indicating a sequence according to which the processes work. Firstly, the induced strategy processes, strategy formulation and strategy implementation, set the scene for the corporate venturing context and management processes and leave strategy formation to the venture. Only after that can strategies formed by ventures come back to the corporate level, and, if found viable at the corporate level, be formalized through formulation and implementation. With the help of the framework of strategic corporate venturing, the link between corporate renewal and corporate venturing can be found and managed. The suggested response to the continuous need for change is continuous renewal, i.e. institutionalizing corporate renewal in the strategy processes of the company.
As far as benefiting from venturing is concerned, the answer lies in deliberately managing venturing in a context different from the mainstream businesses and in establishing efficient linking processes to exploit the renewal potential of individual ventures.
Abstract:
Convective transport, both pure and combined with diffusion and reaction, can be observed in a wide range of physical and industrial applications, such as heat and mass transfer, crystal growth or biomechanics. The numerical approximation of this class of problems can present substantial difficulties due to regions of high gradients (steep fronts) of the solution, where the generation of spurious oscillations or smearing should be precluded. This work is devoted to the development of an efficient numerical technique to deal with pure linear convection and convection-dominated problems in the framework of convection-diffusion-reaction systems. The particle transport method developed in this study is based on meshless numerical particles which carry the solution along the characteristics defining the convective transport. The resolution of steep fronts of the solution is controlled by a special spatial adaptivity procedure. The semi-Lagrangian particle transport method uses a fixed Eulerian grid to represent the solution. In the case of convection-diffusion-reaction problems, the method is combined with diffusion and reaction solvers within an operator-splitting approach. To transfer the solution from the particle set onto the grid, a fast monotone projection technique is designed. Our numerical results confirm that the method has spatial accuracy of second order and can be faster than typical grid-based methods of the same order; for pure linear convection problems the method demonstrates optimal linear complexity. The method works on structured and unstructured meshes, demonstrating a high-resolution property in regions of steep fronts of the solution. Moreover, the particle transport method can be successfully used for the numerical simulation of real-life problems in, for example, chemical engineering.
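To make the semi-Lagrangian idea concrete, here is a simplified Python sketch of one advection step for 1-D linear convection, tracing characteristics backwards from grid nodes; the thesis's actual method transports mesh-free particles and projects them onto the grid with a monotone projection, which this grid-node variant deliberately omits.

import numpy as np

def semi_lagrangian_step(u: np.ndarray, velocity: float,
                         dt: float, dx: float) -> np.ndarray:
    """One semi-Lagrangian step for 1-D linear advection u_t + a u_x = 0.

    The solution is constant along the characteristics x(t) = x0 + a t,
    so the new grid value is the old solution evaluated at the departure
    point x_i - a dt (linear interpolation on a periodic grid).
    """
    n = len(u)
    x = np.arange(n) * dx
    departure = (x - velocity * dt) % (n * dx)   # periodic domain
    return np.interp(departure, x, u, period=n * dx)

# Advect a steep front once around a periodic unit domain.
n, dx, a, dt = 200, 1.0 / 200, 1.0, 0.5 / 200
u = np.where(np.abs(np.arange(n) * dx - 0.3) < 0.1, 1.0, 0.0)
for _ in range(400):
    u = semi_lagrangian_step(u, a, dt, dx)
print(u.max(), u.min())   # the front is smeared by interpolation but stable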
Abstract:
Over 70% of the total costs of an end product are consequences of decisions made during the design process. A search for optimal cross-sections will often have only a marginal effect on the amount of material used if the geometry of a structure is fixed and if the cross-sectional characteristics of its elements are properly designed by conventional methods. In recent years, optimal geometry has become a central area of research in the automated design of structures. It is generally accepted that no single optimisation algorithm is suitable for all engineering design problems; an appropriate algorithm must therefore be selected individually for each optimisation situation. Modelling is the most time-consuming phase in the optimisation of steel and metal structures. In this research, the goal was to develop a method and computer program which reduce the modelling and optimisation time for structural design. The program needed an optimisation algorithm suitable for various engineering design problems. Because finite element modelling is commonly used in the design of steel and metal structures, the interaction between a finite element tool and the optimisation tool needed a practical solution. The developed method and computer programs were tested with standard optimisation tests and practical design optimisation cases. Three generations of computer programs were developed. The programs combine an optimisation problem modelling tool and an FE-modelling program using three alternative methods. The modelling and optimisation were demonstrated in the design of a new boom construction and the steel structures of flat and ridge roofs. This thesis demonstrates that the time-consuming modelling phase is significantly shortened. Modelling errors are reduced and the results are more reliable. A new selection rule for the evolution algorithm, which eliminates the need for constraint weight factors, is tested with optimisation cases of steel structures that include hundreds of constraints. The tested algorithm can be used nearly as a black box, without parameter settings and penalty factors for the constraints.
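The abstract does not detail the new selection rule, so the following Python sketch shows the widely used feasibility-first tournament comparison as a stand-in for how an evolutionary algorithm can handle constraints with no weight or penalty factors at all.

import random

def constrained_tournament(pop, objective, violation):
    """Binary tournament that needs no constraint weight factors.

    Comparison rules (a common penalty-free scheme, shown here as a
    stand-in for the thesis's rule, which the abstract does not detail):
      1. a feasible solution beats an infeasible one;
      2. two feasible solutions compare by objective value;
      3. two infeasible solutions compare by total constraint violation.
    """
    a, b = random.sample(pop, 2)
    va, vb = violation(a), violation(b)
    if va == 0 and vb == 0:
        return a if objective(a) <= objective(b) else b
    if va == 0 or vb == 0:
        return a if va == 0 else b
    return a if va <= vb else b

# Hypothetical usage: minimise f(x) = x^2 subject to x >= 1.
pop = [random.uniform(-3, 3) for _ in range(20)]
winner = constrained_tournament(pop, objective=lambda x: x * x,
                                violation=lambda x: max(0.0, 1.0 - x))
print(winner)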