957 results for Global localization problem
Abstract:
Perceiving the world visually is a basic act for humans, but for computers it is still an unsolved problem. The variability present in natural environments is an obstacle for effective computer vision. The goal of invariant object recognition is to recognise objects in a digital image despite variations in, for example, pose, lighting or occlusion. In this study, invariant object recognition is considered from the viewpoint of feature extraction. The differences between local and global features are studied, with emphasis on Hough transform and Gabor filtering based feature extraction. The methods are examined with respect to four capabilities: generality, invariance, stability, and efficiency. Invariant features are presented using both the Hough transform and Gabor filtering. A modified Hough transform technique is also presented in which distortion tolerance is increased by incorporating local information. In addition, methods for decreasing the computational cost of the Hough transform using parallel processing and local information are introduced.
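As an illustration of the Gabor filtering side of the study, here is a minimal feature-extraction sketch; it assumes OpenCV, and the function name, kernel size and filter-bank parameters are illustrative choices rather than the ones used in the thesis:

```python
import cv2
import numpy as np

def gabor_features(image, wavelengths=(4, 8), orientations=4):
    """Pool the responses of a small Gabor filter bank into a feature vector."""
    feats = []
    for lambd in wavelengths:              # wavelength of the sinusoidal factor
        for k in range(orientations):      # evenly sampled filter orientations
            theta = k * np.pi / orientations
            kernel = cv2.getGaborKernel(ksize=(31, 31), sigma=4.0, theta=theta,
                                        lambd=lambd, gamma=0.5, psi=0)
            response = cv2.filter2D(image, cv2.CV_32F, kernel)
            # Mean magnitude pooled over the whole image is a global feature;
            # pooling over local windows instead would yield local features.
            feats.append(float(np.abs(response).mean()))
    return np.array(feats)
```

Pooling over the whole image makes the vector tolerant to translation; tolerance to rotation or scale has to come from how the orientation and wavelength samples are combined, for example by taking the maximum over orientations.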
Abstract:
In this work, we present an integral scheduling system for non-dedicated clusters, termed CISNE-P, which ensures the performance required by local applications while simultaneously allocating cluster resources to parallel jobs. Our approach solves the problem efficiently by using a social contract technique. This kind of technique is based on reserving computational resources, preserving a predetermined response time for local users. CISNE-P is a middleware that includes both a previously developed space-sharing job scheduler and a dynamic coscheduling system, a time-sharing scheduling component. Experiments performed on a Linux cluster show that these two scheduler components are complementary and that good coordination improves global performance significantly. We also compare two different CISNE-P implementations: one developed inside the kernel, and the other implemented entirely in user space.
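The social contract idea can be illustrated with a toy admission check; this is a hypothetical sketch of the reservation principle, not CISNE-P's actual code:

```python
# Hypothetical sketch: a node admits a parallel task only if the CPU share
# reserved for local users would remain untouched after admission.
RESERVED_LOCAL_SHARE = 0.25   # assumed fraction reserved for local users

def can_admit(parallel_load: float, local_load: float, task_demand: float) -> bool:
    """True if admitting the task keeps the local-user reservation intact."""
    # Treat local usage as at least the reserved share, even when users are idle.
    free = 1.0 - parallel_load - max(local_load, RESERVED_LOCAL_SHARE)
    return task_demand <= free

print(can_admit(parallel_load=0.40, local_load=0.10, task_demand=0.30))  # True
```

Reserving the share even when local users are idle is what preserves their predetermined response time: capacity is kept ready for them instead of being handed to the parallel workload.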
Abstract:
Background: The design of newly engineered microbial strains for biotechnological purposes would greatly benefit from the development of realistic mathematical models for the processes to be optimized. Such models can then be analyzed and, with the development and application of appropriate optimization techniques, one could identify the modifications that need to be made to the organism in order to achieve the desired biotechnological goal. As appropriate models to perform such an analysis are necessarily non-linear and typically non-convex, finding their global optimum is a challenging task. Canonical modeling techniques, such as Generalized Mass Action (GMA) models based on the power-law formalism, offer a possible solution to this problem because they have a mathematical structure that enables the development of specific algorithms for global optimization. Results: Based on the GMA canonical representation, we developed in previous work a highly efficient optimization algorithm and a set of related strategies for understanding the evolution of adaptive responses in cellular metabolism. Here, we explore the possibility of recasting kinetic non-linear models into an equivalent GMA model, so that global optimization can be performed on the recast GMA model. With this technique, optimization is greatly facilitated and the results are transposable to the original non-linear problem. This procedure is straightforward for a particular class of non-linear models known as Saturable and Cooperative (SC) models, which extend the power-law formalism to deal with saturation and cooperativity. Conclusions: Our results show that recasting non-linear kinetic models into GMA models is indeed an appropriate strategy that helps to overcome some of the numerical difficulties that arise during the global optimization task.
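A sketch of the recasting idea in standard power-law notation (reconstructed from the usual GMA conventions, not copied from the paper): a GMA model writes every rate as a product of power laws, and a saturable rate law can be brought into that form by introducing an auxiliary variable for the denominator.

```latex
% GMA canonical form: every rate is a product of power laws
\dot{X}_i \;=\; \sum_{j} \gamma_{ij} \prod_{k} X_k^{f_{ijk}},
\qquad i = 1,\dots,n.

% Recasting a saturable (Michaelis--Menten) rate into GMA form
% via an auxiliary variable for the denominator:
v = \frac{V_{\max}\,X}{K_M + X}
\quad\xrightarrow{\;Z \,\equiv\, K_M + X\;}\quad
v = V_{\max}\,X\,Z^{-1},
\qquad \dot{Z} = \dot{X}.
```

The recast system is exactly equivalent to the original on trajectories, which is why an optimum found for the GMA form maps back to the original kinetic model.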
Abstract:
This empirical study investigates the effects of long-term, embedded, structured and supported instruction in Secondary Education on the development of Information Problem Solving (IPS) skills. Forty secondary students in 7th and 8th grades (13–15 years old) took part in the 2-year study: twenty received the IPS instruction designed in this study, and the remaining twenty formed the control group. All the students were pre- and post-tested in their regular classrooms, and their IPS process and performance were logged by means of screen capture software to ensure ecological validity. The IPS constituent skills, the web search sub-skills and the answers given by each participant were analyzed. The main findings suggest that experimental students showed a more expert pattern than the control students regarding the constituent skill ‘defining the problem’ and the following two web search sub-skills: ‘search terms’ typed into a search engine, and ‘selected results’ from a SERP. In addition, scores of task performance were statistically better for experimental students than for control group students. The paper contributes to the discussion of how well-designed and well-embedded scaffolds could be built into instructional programs in order to guarantee the development and efficiency of students’ IPS skills, helping them use online information better and participate fully in the global knowledge society.
Abstract:
Identifying users in information systems has been one of the cornerstones of information security for decades. The idea of a username and password is the most cost-effective and most widely used way of maintaining trust between an information system and its users. In the early days of information systems, when companies had only a few systems used by a small group of users, this model proved workable. Over the years the number of systems grew, and with it the number and diversity of passwords. No one could predict how many password-related problems users would encounter, how heavily these would burden corporate help desks, and what kinds of security risks passwords would pose in large enterprises. In this Master's thesis we examine the problems caused by passwords in a large, global company. The problems are examined from four perspectives: people, technology, security, and business. They are demonstrated by presenting the results of an employee survey conducted as part of this thesis. A solution to these problems is presented in the form of a centralized password management system. Various features of such a system are evaluated, and a proof-of-concept implementation is built to demonstrate its functionality.
Abstract:
Value-added services based on short messages developed rapidly at the end of the last decade into the most profitable uses of mobile telecommunication networks. These services have often been built quickly, without considering the portability problems that arise when a service is taken beyond its original environment. The purpose of this work is to study the technical problems to be expected when short-message-based services are taken to international markets. As a solution, the Intellitel Messaging Gateway (MGw) is introduced, a gateway that enables the creation of value-added services offered over open Internet protocols. The practical part of the work consists of a selection of small design and implementation tasks whose purpose is to fix features and deficiencies in the Intellitel MGw that hinder international deployment. The most important of these are the restrictions imposed by character-set, addressing, and protocol compatibility.
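The character-set restriction is the classic example: messages limited to the GSM 7-bit default alphabet fit 160 characters, while a single character outside it forces 16-bit UCS-2 encoding and a 70-character limit. A hedged sketch of such a compatibility check follows; the alphabet table is abbreviated and the function is illustrative, not part of Intellitel MGw:

```python
# Illustrative SMS character-set check; a real gateway would use the
# complete GSM 03.38 table, including the escape-sequence extension set.
GSM_7BIT = set(
    "@£$¥èéùìòÇ\nØø\rÅåΔ_ΦΓΛΩΠΨΣΘΞÆæßÉ !\"#¤%&'()*+,-./0123456789:;<=>?"
    "¡ABCDEFGHIJKLMNOPQRSTUVWXYZÄÖÑܧ¿abcdefghijklmnopqrstuvwxyzäöñüà"
)

def sms_encoding(text: str) -> tuple[str, int]:
    """Return the required encoding and the per-message character limit."""
    if all(ch in GSM_7BIT for ch in text):
        return "GSM-7", 160
    return "UCS-2", 70   # any character outside the 7-bit alphabet

print(sms_encoding("Hyvää yötä"))  # ('GSM-7', 160): ä and ö are in the alphabet
print(sms_encoding("Привет"))      # ('UCS-2', 70): Cyrillic is not
```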
Abstract:
The goal of this work was to define the general structure of the product model used in a sales configurator. First, the creation of a product model and the design of the concept were studied through the literature and expert interviews. The expert interviews were conducted in a free-form manner, supported by a list of questions. In addition, the work considers the role of e-business and the future prospects of the sales configurator. The thesis treats the product model on a general level; a second perspective treats it through the methods used in information technology applications. Construction of the product model began with the part visible to the customer, i.e. the user interface of the sales configurator. The next problem was to standardize the documents describing the product and the quotation globally; this solution was arrived at on the basis of the interviews and expert meetings. The final part of the thesis discusses the position of the sales configurator in the target company's e-business and presents one view of integrating it with the customer relationship management and product data management systems. The thesis achieved its goals: the sales configurator unifies the target company's pricing globally, speeds up the quotation process, eases the launch of new products, and standardizes the product model globally. Integrating the sales configurator with other information systems makes sales operations more efficient. The remaining challenge is to encourage end users to use the system effectively and to organize its maintenance. Without users and their enthusiasm, the project may lose management's confidence.
Abstract:
Considered as a remedy to multiple problems that our world is facing, biofuels are nowadays promoted on a global scale. Despite this globalised approach, however, biofuels are heavily contested. Not only are the social implications of biofuels disputed and uncertain, particularly in countries of the global South, but so are their environmental and economic rationales. Given these huge controversies, policies promoting biofuels would seem difficult to maintain. Yet support for them has been surprisingly well established on political agendas. With the aim of understanding this puzzle, this study asks how the dominant approach to biofuels has been sustained at a global level. To answer this question, the meanings and assumptions in biofuel discourses are explored through the lens of Maarten Hajer’s “argumentative” discourse analysis. Based on the existence of a “partnership for sustainable bioenergy” between the EU, Brazil and Mozambique, the study takes these three locations as case studies. The analysis reveals that various discursive strategies, including a particular problem construction and the use of two main story-lines, have played an important role in ensuring the permanence of the global approach to biofuels. Moreover, while the discourse of critics of biofuels demonstrates that there is room for contestation, the analysis finds that the opponents’ discourse largely fails to target the most salient justification for biofuels. A more effective strategy for critics would therefore be to also question the problem constructions underpinning this main justification in the global discourse.
Abstract:
Simultaneous localization and mapping (SLAM) is a very important problem in mobile robotics. Many solutions have been proposed during the last two decades; nevertheless, few studies have considered the use of multiple sensors simultaneously. The solution presented here combines several data sources with the aid of an Extended Kalman Filter (EKF). Two approaches are proposed. The first is to run the ordinary EKF SLAM algorithm for each data source separately in parallel and then, at the end of each step, fuse the results into one solution. The second is to use multiple data sources simultaneously in a single filter. A comparison of the computational complexity of the two methods is also presented. The first method is almost four times faster than the second.
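A minimal sketch of the EKF measurement update that both approaches build on, with two sensors folded sequentially into a single filter (illustrative toy models; the state vector, Jacobians and noise covariances below are assumptions, not the thesis's):

```python
import numpy as np

def ekf_update(x, P, z, H, R):
    """One EKF measurement update; H is the linearized measurement Jacobian."""
    y = z - H @ x                      # innovation
    S = H @ P @ H.T + R                # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
    x = x + K @ y
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

x = np.zeros(3)        # toy robot state: [px, py, heading]
P = np.eye(3)          # state covariance
H = np.eye(3)          # toy model: each sensor observes the full state

# Single-filter fusion: apply each sensor's update in turn.
for z, R in [(np.array([1.0, 2.0, 0.10]), 0.5 * np.eye(3)),   # e.g. laser
             (np.array([1.1, 1.9, 0.12]), 0.8 * np.eye(3))]:  # e.g. sonar
    x, P = ekf_update(x, P, z, H, R)
```

The parallel-filters variant would instead run one such filter per sensor and merge the resulting estimates (for example by covariance intersection) at the end of every step.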
Abstract:
Global warming mitigation has recently become a priority worldwide. A large body of literature dealing with energy-related problems has focused on reducing greenhouse gas emissions at an engineering scale. In contrast, the minimization of climate change at a wider macroeconomic level has so far received much less attention. We investigate here the issue of how to mitigate global warming by performing changes in an economy. To this end, we make use of a systematic tool that combines three methods: linear programming, environmentally extended input-output models, and life cycle assessment principles. The problem of identifying key economic sectors that contribute significantly to global warming is posed in mathematical terms as a bi-criteria linear program that seeks to optimize simultaneously the total economic output and the total life cycle CO2 emissions. We have applied this approach to the European Union economy, finding that significant reductions in global warming potential can be attained by regulating specific economic sectors. Our tool is intended to aid policymakers in the design of more effective public policies for achieving the environmental and economic targets sought.
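A sketch of a bi-criteria LP of this kind, reconstructed from standard environmentally extended input-output conventions rather than from the paper (A: technical-coefficient matrix, x: sectoral outputs, y: final demand, c: sectoral life cycle CO2 intensities):

```latex
\begin{aligned}
\max_{x,\,y}\quad & \lambda\,\mathbf{1}^{\mathsf T} x \;-\; (1-\lambda)\, c^{\mathsf T} x
  && \lambda \in [0,1] \text{ (scalarization weight)} \\
\text{s.t.}\quad & x = A x + y && \text{(input--output balance)} \\
& y \ge y^{\min},\quad x \ge 0 && \text{(minimum final demand)}
\end{aligned}
```

Sweeping the weight from 0 to 1 traces the Pareto frontier between total economic output and total emissions, which is what identifies the sectors whose regulation buys the largest emission reductions at the least economic cost.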
Abstract:
Organizing is a general problem for global firms. Firms are seeking a balance between responsiveness at the local level and efficiency through worldwide integration. In this, supply management is the focal point where external commercial supply market relations are connected with the firm's internal functions. Here, effective supplier relationship management (SRM) is essential. Global supply integration processes create new challenges for supply management professionals, and new capabilities are required. Previous research has developed several models and tools for managers to manage and categorize different supplier relationship types, but the role of the firm's internal capability of managing supplier relationships in global integration has been clearly neglected. Hence, the main objective of this dissertation is to clarify how the capability of SRM may influence the firm's global competitiveness. This objective is divided into four research questions aiming to identify the elements of SRM capability, the internal factors of integration, the effect of SRM capability on strategy, and how SRM capability is linked with global integration. The dissertation has two parts. The first part presents the theoretical approaches and practical implications from previous research and draws a synthesis of them. The second part comprises four empirical research papers addressing the research questions. Both qualitative and quantitative methods are utilized. The main contribution of this dissertation is that it aggregates the theoretical and conceptual perspectives applied to SRM research. Furthermore, given the lack of valid scales to measure capability, this study aimed to provide a foundation for an SRM capability scale by showing that the construct of SRM capability is formed of five separate elements. Moreover, SRM capability was found to be the enabler of efforts toward value chain integration. Finally, the effect of capability on global competitiveness was found to be twofold: it reduces conflicts between responsiveness and integration, and it creates efficiency. Thus, by identifying and developing the firm's capabilities it is possible to improve performance and, hence, global competitiveness.
Abstract:
The aim of this thesis is to study and understand the theoretical concept of the Metanational corporation and how Web 2.0 technologies can be used to support the theory. The empirical part of the study compares the theory to the case company's current situation. The goal of the theoretical framework is to show how Web 2.0 technologies can be used on the three levels of the Metanational corporation. To do this, knowledge management, and more specifically knowledge transfer, is studied to understand what is required of Web 2.0 technologies in the different functions and operations of the Metanational corporation. The final synthesis of the theoretical framework is a model in which the Web 2.0 technologies are placed on the levels of the Metanational corporation. The empirical part of the study is based on interviews conducted in the case company. The aim of the interviews is to understand the current state of the company in relation to the theoretical framework. Based on the interviews, the differences between the theoretical concept and the case company are presented and studied. Finally, the study presents the problem areas found, and where the adoption of Web 2.0 tools is seen as beneficial, based on the interviews and the theoretical framework.
Abstract:
Many quantitative problems from widely different fields can be described as optimization problems: a measure of the quality of the solution is to be optimized while certain constraints on the solution are satisfied. The quality measure is usually called the objective function and may describe costs (e.g. production, logistics), potential energy (molecular modeling, protein folding), risk (finance, insurance), or some other relevant measure. My doctoral thesis discusses in particular non-linear programming, NLP, in finite dimensions. Problems with simple structure, for example some form of convexity, can be solved efficiently. Unfortunately, not all quantitative relationships can be modeled in a convex way. Non-convex problems can be attacked with heuristic methods, algorithms that search for solutions using deterministic or stochastic rules of thumb. Sometimes this works well, but heuristics can rarely guarantee the quality of the solution, or even that a solution will be found at all. For some applications this is unacceptable. Instead, so-called global optimization can be applied: by successively dividing the variable domain into smaller parts and computing tighter bounds on the optimal value, a solution within the error tolerance is found. This method is called branch-and-bound. To provide lower bounds (when minimizing), the problem is approximated by simpler problems, for example convex ones, that can be solved efficiently. The thesis studies approaches to approximating differentiable functions by convex underestimators, in particular the so-called alphaBB method. This method adds perturbations of a certain form and guarantees convexity by imposing conditions on the perturbed Hessian matrix. My research has brought forward a natural extension of the perturbations used in alphaBB. New methods for determining the underestimation parameters have been described and compared. The summary part discusses global optimization from broader perspectives on optimization and computational algorithms.
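For reference, the classical alphaBB underestimator in its standard form from the literature (the thesis's extended perturbations are not reproduced here): on a box [x^L, x^U], a twice-differentiable function f is underestimated by

```latex
L(x) \;=\; f(x) \;+\; \sum_{i=1}^{n} \alpha_i\,(x_i^{L} - x_i)(x_i^{U} - x_i),
\qquad \alpha_i \ge 0,
```

where each alpha_i is chosen large enough that the perturbed Hessian, the matrix sum of the Hessian of f and 2 diag(alpha), is positive semidefinite over the box, making L convex. Since each quadratic term is nonpositive on the box, L never exceeds f there, so minimizing L yields a valid lower bound for branch-and-bound.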
Abstract:
Business intelligence (BI) is an information process that includes the activities and applications used to transform business data into valuable business information. Today’s enterprises collect detailed data, which has increased the available business data drastically. In order to meet changing customer needs and gain competitive advantage, businesses try to leverage this information. However, IT departments are struggling to meet the increased amount of reporting needs. Therefore, the recent shift in the BI market has been towards empowering business users with self-service BI capabilities. The purpose of this study was to understand how self-service BI could help businesses meet increased reporting demands. The research problem was approached with an empirical single case study. Qualitative data was gathered with a semi-structured, theme-based interview. The study found that the case company’s BI system was mostly used for group performance reporting. Ad-hoc and business user-driven information needs were mostly fulfilled with self-made tools and manual work. It was felt that necessary business information was not easily available. The concept of self-service BI was perceived to be helpful in meeting such reporting needs. However, it was found that the available data is often too complex for an average user to fully understand. The respondents felt that in order for self-service BI to work, the data has to be simplified and described in a way that the average business user can understand. The results of the study suggest that BI programs struggle to meet all the information needs of today’s businesses. The concept of self-service BI tries to resolve this problem by allowing users easy self-service access to necessary business information. However, business data is often complex and hard to understand. Self-service BI has to overcome this challenge before it can realize its potential benefits.