30 results for Dependent Failures, Interactive Failures, Interactive Coefficients, Reliability, Complex System
in Doria (National Library of Finland DSpace Services) - National Library of Finland, Finland
Abstract:
As wireless local area network technologies have become more common, the demand for wireless networks and services has grown rapidly. In particular, the IEEE 802.11b standard, published in 1997, has enabled the rapid development of wireless network technologies. This thesis presents a design for the structure and implementation of an interface between two networks. The interface is called an interconnection point. Its main task is to act as the node for all traffic between the networks and to manage both the internal network with its users and the operators connected on the external network side. The interconnection point is responsible for identifying the users of the internal network, authorizing them in cooperation with the operators, managing the network addresses of the internal network users, and relaying network traffic. The interconnection point is able to route a user to the correct operator and ensures that the user has access to the services he or she is authorized to use. The thesis defines the interfaces of the interconnection point both for the operators connected to it and for the basic services offered to the internal network. In addition, the internal interfaces of the interconnection point are defined. The interconnection point does not restrict the network technology used, but this thesis focuses on WLAN networks compliant with the IEEE 802.11b standard. Networks of one or more operators exist in both wired and wireless environments. In these networks, however, each Internet operator takes care of only its own customers: the internal network is closed, and only the operator's own customers can join it. The interconnection point developed in this work is a solution that makes it possible to build a multi-operator regional network that is open to all of its users.
Abstract:
The primary objective is to identify the critical factors that have a natural impact on the performance measurement system. It is important to make correct decisions about measurement systems, which operate in a complex business environment. The performance measurement system is intertwined with highly complex, non-linear factors. The Six Sigma methodology is seen as one potential approach at every organisational level. It is linked to performance and financial measurement as well as to the analytical thinking on which the management viewpoint depends. The complex systems are connected to the customer relationship study. The primary throughput is a new, well-defined performance measurement structure, supported by an analytical multifactor system. These critical factors should at the same time be seen as a business innovation opportunity. This master's thesis is divided into two theoretical parts. The empirical part consists of both action-oriented and constructive research approaches with an empirical case study. The secondary objective is to seek a competitive advantage factor with a new analytical tool and Six Sigma thinking. Process and product capabilities are linked to the contribution of the complex system. The critical barriers are identified with the performance measurement system. The secondary throughput is the product and process cost efficiency achieved through the advantage of management. The performance measurement potential is related to different kinds of productivity analysis. Productivity can be seen as one essential part of the competitive advantage factor.
Abstract:
The rotational speed of high-speed electric machines is over 15 000 rpm. These machines are compact in size compared to their power rating. As a consequence, the heat fluxes are at a high level and the adequacy of cooling becomes an important design criterion. In high-speed machines, the air gap between the stator and rotor is a narrow flow channel. The cooling air is produced with a fan and the flow is then directed to the air gap. The flow in the gap does not provide sufficient cooling for the stator end windings, and therefore additional cooling is required. This study investigates the heat transfer and flow fields around the coil end windings when cooling jets are used. As a result, an innovative new assembly is introduced for the cooling jets, with the benefits of a reduced number of hot spots, a lower pressure drop, and hence a lower power need for the cooling fan. The information gained can also be applied to improve the cooling of electric machines through geometry modifications. The objective of the research is to determine the locations of the hot spots and to find out the induced pressure losses with different jet alternatives. Several possibilities to arrange the extra cooling are considered. In the suggested approach, cooling is provided by a row of air jets. The air jets have three main tasks: to cool the coils effectively by direct impingement jets, to increase and cool down the flow that enters the coil end space through the air gap, and to ensure the correct distribution of the flow by forming an air curtain with additional jets. One important aim of this study is to arrange the cooling jets in such a manner that hot spots can be avoided to a wide extent. This enables higher power density in high-speed motors. The cooling system can also be applied to ordinary electric machines when efficient cooling is needed. The numerical calculations have been performed using commercial Computational Fluid Dynamics software. Two geometries have been generated: cylindrical for the studied machine and Cartesian for the experimental model. The main parameters include the positions, arrangements and number of jets, the jet diameters, and the jet velocities. The investigated cases have been tested with two widely used turbulence models and using a computational grid of over 500 000 cells. The experimental tests have been made using a simplified model of the end winding space with cooling jets. In the experiments, an emphasis has been placed on flow visualisation. The computational analysis shows good agreement with the experimental results. Modelling of the cooling jet arrangement also enables a better understanding of the complex system of heat transfer in the end winding space.
Abstract:
Web portals offer unique tools for creating different kinds of content, a variety of navigation paths, personal pages and security services. A portal is a complex system that contains many cooperating components and is usually implemented with ready-made software products. This study deals with a portal implementation based on IBM/Tivoli products. The integration of the portal components is critical for the overall system architecture and may require additional software development. The primary objective of the study is to develop a customized component for two portal subsystems: subscriber management and security services. In the study, Tivoli Personalized Services Manager (TPSM) and Tivoli SecureWay Policy Director (PD) are examined. The integration involves synchronization of the TPSM database and the PD User Registry data. The integration software has been designed and implemented on the basis of the existing subsystems.
Abstract:
This thesis examines the ageing of frequency converters in cyclic duty with respect to the power semiconductor components. To analyse the failure mechanisms of the devices, a cyclic endurance test system is designed that allows several frequency converters to be aged simultaneously. Periodically repeated load cycles stress the internal structures of the frequency converter's power module thermomechanically because of large temperature variations. The theoretical part concentrates on the structure, failure mechanisms and lifetime estimation of power semiconductor components. The thesis reviews the mechanical structures of the power modules of the most common low-voltage motor control inverters, the most typical failure mechanisms caused by cyclic loading, and the power cycling lifetime test methods used by power semiconductor manufacturers. In the final part of the work, a cyclic endurance test system for frequency converters is designed for artificial ageing of the devices. The test system can load several frequency converters alternately with an arbitrary load current profile. Two test batches of devices were aged by loading them periodically with a load profile typical of an elevator drive. The progress of the ageing of the power semiconductor components was monitored with a thermal impedance measurement method based on the temperature dependence of the IGBT collector-emitter voltage. The power semiconductor components of the test devices failed at the end of their cycling lifetime as a result of the failure mechanisms presented in the theoretical part. The failure analysis of the power modules shows that the cyclic endurance test system is a suitable method for studying the effect of different load profiles on the ageing of frequency converters.
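A hypothetical sketch of the measurement principle mentioned above: estimating the IGBT junction temperature from the collector-emitter voltage read at a small, constant sense current, assuming a pre-measured linear calibration. The constants and values below are illustrative assumptions, not figures from the thesis.

```python
# Junction-temperature estimation from V_ce at a small sense current.
# The calibration constants are hypothetical (a typical sensitivity is about -2 mV/K).

V0_VOLT = 0.65         # assumed V_ce at 0 degC, measured beforehand at the sense current
K_VOLT_PER_K = -0.002  # assumed slope of the V_ce(T_j) calibration line

def junction_temperature(v_ce_volt: float) -> float:
    """Invert the linear calibration V_ce = V0 + K * T_j to estimate T_j in degC."""
    return (v_ce_volt - V0_VOLT) / K_VOLT_PER_K

# Example: a reading of 0.50 V implies a junction temperature of about 75 degC.
print(junction_temperature(0.50))
```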
Abstract:
Sustainability, in its modern meaning, has been discussed for more than forty years. However, many experts believe that humanity is still far from being sustainable. Since the 1990s, some experts have argued that humanity should seek survivable development because it is too late for sustainable development. Evidently, some problems have prevented humanity from becoming sustainable. This thesis focuses on the agenda of sustainability discussions and seeks the essential topics missing from it. For this purpose, the research is conducted on 21 out of 33 books endorsed by the Club of Rome. All of these books are titled 'a report to the Club of Rome'. The Club of Rome is an organization that has been constantly working on the problems of humankind for the past 40 years. This thesis has three main components: first, the messages of the reports to the Club of Rome; second, academic perceptions of the Club; and third, the Club members' perceptions of its evolution, messages and missing topics. This thesis investigates the evolution of four aspects in the reports. The first one is the agenda of the reports. The second one is the basic approaches of the Club (i.e., global, long-term and holistic). The third one is the way the reports treat free market and growth ideology. The fourth one is the approach of the reports toward the components of the global complex system (i.e., society, economy and politics). The outline of the thesis is as follows. First, the original reports are briefly summarized. After this, the academic perceptions are discussed and structured around three concepts (i.e., futures studies, sustainability and degrowth). In the final step, the perceptions of the experts are collected and analysed using a variation of the Delphi method, called 'in-depth interviews', and the 'qualitative content analysis' method. This thesis is useful for those interested in sustainability, global problems, and the Club of Rome. This thesis concludes that the reports from 1972 up to 1980 were cohesive in discussing topics related to the problems of humankind. The topics of the reports become fragmented after this period. The basic approaches of the CoR are visible in all the reports. However, after 1980, those approaches, and especially holistic thinking, are only visible in the background. Regarding free market and growth ideology, although all the reports are against them, the early reports expressed their disagreement more explicitly. A milestone is noticeable around 1980, when such objections went completely into the background. However, recent reports are more similar to those of the 1970s both in adopting a holistic approach and in explicitly criticizing free market and growth ideology. Finally, concerning the components of the global complex system, society is excluded and the focus of the reports is on politics, the economy and their relation. Concerning the topics missing from the debate, this thesis concludes that no major research has been conducted on the fundamental and underlying reasons for the problems (e.g. beliefs, values and culture). Studying the problems without considering their underlying reasons obviously leads to superficial and ineffective solutions. This might be one of the reasons that sustainability discussions have as yet led to no concrete results.
Abstract:
The desire to create a statistical or mathematical model that would allow predicting future changes in stock prices was born many years ago. Economists and mathematicians have been trying to solve this task by applying statistical analysis and physical laws, but there are still no satisfactory results. The main reason for this is that a stock exchange is a non-stationary, unstable and complex system, which is influenced by many factors. In this thesis the New York Stock Exchange was considered as the system to be explored. A topological analysis, basic statistical tools and singular value decomposition were used to understand the behaviour of the market. Two methods for normalization of the initial daily closing prices by the Dow Jones and S&P 500 indices were introduced and applied in the analysis. As a result, some unexpected features were identified, such as the shape of the distribution of the correlation matrix entries, the bulk of which is shifted to the right of zero. The non-ergodicity of the NYSE was also confirmed graphically. It was shown that the singular vectors differ from each other by a constant factor. This work does not reach definitive conclusions for all results, but it creates a good basis for further analysis of the market topology.
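A minimal sketch of the kind of analysis described above: forming a correlation matrix of daily stock returns and inspecting its spectrum with a singular value decomposition. The file name and column layout are assumptions, and a common log-return standardization is used here instead of the index-based normalizations introduced in the thesis.

```python
import numpy as np

# Hypothetical input: CSV of daily closing prices, one column per stock
prices = np.loadtxt("nyse_daily_close.csv", delimiter=",")   # shape: (days, stocks)

# Logarithmic daily returns, standardized per stock
returns = np.diff(np.log(prices), axis=0)
returns = (returns - returns.mean(axis=0)) / returns.std(axis=0)

# Stock-by-stock correlation matrix and the distribution of its off-diagonal entries
corr = np.corrcoef(returns, rowvar=False)
off_diag = corr[~np.eye(corr.shape[0], dtype=bool)]
print("mean off-diagonal correlation:", off_diag.mean())

# Singular value decomposition of the standardized return matrix
U, s, Vt = np.linalg.svd(returns, full_matrices=False)
print("largest singular values:", s[:5])
```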
Abstract:
Laser cutting implementation possibilities in a paper making machine were studied as the main objective of the work. Laser cutting technology was considered as a replacement for the conventional methods used in paper making machines for longitudinal cutting, such as edge trimming at different stages of the paper making process and tambour roll slitting. Laser cutting of paper was tested for the first time in the 1970s. Since then, laser cutting and processing have been applied to paper materials in industry with varying levels of success. Laser cutting can be employed for longitudinal cutting of the paper web in the machine direction. The most common conventional cutting methods applied in paper making machines are water jet cutting and rotating slitting blades. Cutting with a CO2 laser fulfils the basic requirements for cutting quality, applicability to the material and cutting speed in all locations where longitudinal cutting is needed. The literature review describes the advantages, disadvantages and challenges of laser technology when applied to cutting of paper material, with particular attention to cutting of a moving paper web. Based on the studied laser cutting capabilities and the problems of conventional cutting technologies, a preliminary selection of the most promising application area was carried out. Laser cutting (trimming) of the paper web edges in the wet end was estimated to be the most promising area for implementation. This assumption was made on the basis of the rate of web break occurrence. It was found that up to 64 % of the total number of web breaks occurred in the wet end, particularly in the so-called open draws where the paper web is transferred unsupported by wire or felt. The distribution of web breaks in the machine cross direction revealed that defects of the paper web edge were the main cause of tearing initiation and consequent web breaks. The assumption was made that laser cutting is capable of improving the tensile strength of the cut edge due to the high cutting quality and the sealing effect of the edge after laser cutting. Studies of laser ablation of cellulose supported this claim. The linear energy needed for cutting was calculated with regard to the paper web properties at the intended laser cutting location (a sketch of this estimate follows this abstract), and the calculated linear cutting energy was verified with a series of laser cutting tests. The laser energy needed for cutting obtained in practice deviated from the calculated values. This can be explained by differences in radiative heat transfer during laser cutting and the different absorption characteristics of dry and moist paper material. Laser cut samples (both dry and moist, with a dry matter content of about 25-40 %) were tested for strength properties. It was shown that the tensile strength and strain at break of laser cut samples are similar to the corresponding values of non-laser cut samples. The chosen method, however, did not address the tensile strength of the laser cut edge in particular. Thus, the assumption of improving strength properties with laser cutting was not fully proved. The effect of laser cutting on possible contamination of mill broke (recycling of the trimmed edge) was also studied. Laser cut samples (both dry and moist) were tested for the content of dirt particles. The tests revealed that accumulation of dust particles on the surface of moist samples can take place. This has to be taken into account to prevent contamination of the pulp suspension when trim waste is recycled. The material loss due to evaporation during laser cutting and the amount of solid residues after cutting were evaluated. Edge trimming with laser would result in 0.25 kg/h of solid residues and 2.5 kg/h of material lost due to evaporation. Schemes of laser cutting implementation and the needed laser equipment were discussed. Generally, the laser cutting system would require two laser sources (one laser source for each cutting zone), a set of beam transfer and focusing optics, and cutting heads. In order to increase the reliability of the system, it was suggested that each laser source should have double capacity. That would allow cutting to be performed with one laser source working at full capacity for both cutting zones. Laser technology is at the required level at the moment and does not require additional development. Moreover, the potential for speed increase is high due to the availability of high-power laser sources, which can support the tendency toward higher speeds in paper making machines. The laser cutting system would require a special roll to support cutting. A scheme of such a roll was proposed, as well as the integration of the roll into the paper making machine. Laser cutting can be done at the location of the central roll in the press section, before the so-called open draw where many web breaks occur, where it has the potential to improve the runability of the paper making machine. The economic performance of laser cutting was assessed by comparing a laser cutting system and water jet cutting working in the same conditions. It was revealed that laser cutting would still be about two times more expensive than water jet cutting. This is mainly due to the high investment cost of laser equipment and the poor energy efficiency of CO2 lasers. Another factor is that laser cutting causes material loss due to evaporation, whereas water jet cutting causes almost no material loss. Despite the difficulties of implementing laser cutting in a paper making machine, its implementation can be beneficial. Crucial to this is the possibility of improving the strength properties of the cut edge and consequently reducing the number of web breaks. The capacity of laser cutting to maintain cutting speeds exceeding the current speeds of paper making machines is another argument for considering laser cutting technology in the design of new high-speed paper making machines.
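A minimal illustration of the linear-energy estimate mentioned in the abstract above. The expression below is the common definition of linear cutting energy as laser power divided by cutting speed; the numerical example is hypothetical and not a value from the thesis.

```latex
% Linear cutting energy (assumed common definition)
E_{\ell} = \frac{P}{v}
% Hypothetical example: P = 2\,\mathrm{kW},\; v = 20\,\mathrm{m/s}
% \Rightarrow E_{\ell} = 100\,\mathrm{J/m}
```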
Abstract:
The aim of this thesis research work was the carbonate precipitation of magnesium using magnesium hydroxide Mg(OH)2 and carbon dioxide (CO2) gas at ambient temperature and pressure. The rate of dissolution of Mg(OH)2 and the precipitation kinetics were investigated under different operating conditions. The conductivity and pH of the solution were monitored in-line with a Consort meter, and the solid samples obtained from the precipitation reaction were analysed with a Malvern Mastersizer laser diffraction analyzer to obtain the particle size distributions (PSD) of the crystal samples. The Mg2+ concentration profiles were also determined from the liquid phase of the precipitate by ion chromatography (IC) analysis. The crystal morphology of the obtained precipitates is also investigated and discussed in this work. For the carbonation reaction of magnesium hydroxide in the present work, it was found that magnesium carbonate trihydrate (nesquehonite) was the main product and its formation occurred at a pH of around 7-8. The stirrer speed has a significant effect on the dissolution rate of Mg(OH)2. The highest obtained Mg2+ concentration level was 0.424 mol L-1 at 470 rpm and 0.387 mol L-1 at 560 rpm, corresponding to processing times of 45 min and 40 min, respectively. The particle size distribution shows that the average particle size keeps increasing during the reaction as CO2 is fed to the system. The carbonation process is kinetically favoured and simple, as nesquehonite formation occurs in a very short time. Nesquehonite is a thermodynamically and chemically stable solid product, which allows long-term storage of CO2. Since the carbonation reaction is a complex system which includes dissolution of magnesium hydroxide particles, absorption of CO2, chemical reaction and crystallization, the dissolution of magnesium hydroxide was studied in hydrochloric acid (HCl) solvent with and without nitrogen (N2) inert gas. It was found in the dissolution experiments that the impeller speed had an effect on the dissolution rate. The higher the impeller speed, the higher the pH of the solution, although this was not the case for the highest speed of 650 rpm. Therefore, it was concluded that the optimum stirrer speed was 560 rpm. The influence of the inert gas N2 on the dissolution rate of Mg(OH)2 particles could be seen from the measured pH, electric conductivity and Mg2+ concentration curves.
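For illustration, the overall carbonation reaction described above, with nesquehonite (magnesium carbonate trihydrate) as the product, can be written as the standard balanced equation below; intermediate dissolution and absorption steps treated in the thesis are not shown.

```latex
% Overall carbonation of magnesium hydroxide to nesquehonite
\mathrm{Mg(OH)_2 + CO_2 + 2\,H_2O \;\longrightarrow\; MgCO_3\cdot 3H_2O}
```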
THE COSTS OF RAISING EQUITY RATIO FOR BANKS: Evidence from publicly listed banks operating in Finland
Abstract:
The solvency rate of banks differs from that of other corporations. The equity ratio of a bank is lower than in corporations in other fields of business. However, a functioning banking industry has a huge impact on the whole society. The equity ratio of a bank needs to be higher because that makes the banking industry more stable, as the probability of banks going under decreases. If a bank goes belly up, the government will compensate the deposits, since it has granted the bank's depositors a deposit insurance. This means that the payment ultimately comes from the tax payers. Economic discussion has long concentrated on the costs of raising the equity ratio. It has been a common belief that raising the equity ratio also increases the banks' funding costs at the same pace, and that these costs will be passed on to the banks' customers as higher service charges. Despite the common belief, the actual reaction of funding costs to a higher equity ratio has been studied only a little in Europe, and no study has been conducted in Finland. Before it can be calculated whether the higher stability of the banking industry caused by the rise in equity levels compensates for the extra funding costs, it must be calculated how large the actual increase in funding costs is. Currently the banking industry is controlled by complex and heavy regulation, and maintaining such a complex system inflicts major costs in itself. This research leans on the Modigliani and Miller theory, which shows that the financing structure of a firm is irrelevant to its funding costs. In addition, this research follows the calculations of Miller, Yang and Marcheggiano (2012) and Vale (2011), where the funding costs are calculated after doubling specific banks' equity ratios. The Finnish banks studied in this research are Nordea and Danske Bank, because they are the two largest banks operating in Finland and they both also have the right company form to enable the calculations. To calculate the costs of halving their leverage, this study used the Capital Asset Pricing Model. Halving the leverage of Danske Bank raised its funding costs by 16-257 basis points, depending on the method of assessment. For Nordea, the increase in funding costs was 11-186 basis points when its leverage was halved. On the basis of the results found in this study, it can be said that doubling the equity ratio does not increase the funding costs of a bank one-for-one; in fact the increase is quite modest. More solvent banks would increase the stability of the banking industry enormously, while the increase in funding costs is low. If the costs of bank regulation exceed the increase in funding costs after a higher equity ratio, this can be seen as a better way of stabilizing the banking industry than heavy regulation.
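A minimal, hypothetical sketch of the kind of calculation described above: the equity beta is re-levered after leverage is halved, the cost of equity is priced with the CAPM, and the weighted funding cost is compared. All input figures are placeholders, not the values used in the thesis; under the plain Modigliani-Miller baseline shown here the total funding cost stays unchanged, and the thesis quantifies the actual, modest deviation from that baseline.

```python
def levered_beta(asset_beta: float, debt_to_equity: float) -> float:
    # Modigliani-Miller / Hamada re-levering without taxes
    return asset_beta * (1.0 + debt_to_equity)

def capm(risk_free: float, beta: float, market_premium: float) -> float:
    # CAPM: r_e = r_f + beta * (r_m - r_f)
    return risk_free + beta * market_premium

r_f, premium, r_debt = 0.02, 0.05, 0.02       # hypothetical market inputs
asset_beta = 0.07                             # hypothetical unlevered (asset) beta

for debt_to_equity in (19.0, 9.0):            # leverage of 20x vs. 10x (halved)
    equity_share = 1.0 / (1.0 + debt_to_equity)
    r_e = capm(r_f, levered_beta(asset_beta, debt_to_equity), premium)
    funding_cost = equity_share * r_e + (1.0 - equity_share) * r_debt
    print(f"D/E {debt_to_equity:4.0f}: cost of equity {r_e:.2%}, "
          f"weighted funding cost {funding_cost:.2%}")
```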
Abstract:
Software faults are expensive and cause serious damage, particularly if discovered late or not at all. Some software faults tend to stay hidden. One goal of the thesis is to establish the status quo in the field of software fault elimination, since there are no recent surveys of the whole area. A basis for a structural framework is proposed for this unstructured field, paying attention to compatibility between studies and to how studies can be found. Bug elimination means are surveyed, including bug know-how, defect prevention and prediction, analysis, testing, and fault tolerance. The most common research issues for each area are identified and discussed, along with issues that do not get enough attention. Recommendations are presented for software developers, researchers, and teachers. Only the main lines of research are covered, and the main emphasis is on technical aspects. The survey was done by performing searches in the IEEE, ACM, Elsevier, and Inspec databases. In addition, a systematic search was done over recent time intervals for a few well-known related journals. Some other journals, some conference proceedings and a few books, reports, and Internet articles were investigated as well. The following problems were found and solutions for them discussed. A common misunderstanding is that quality assurance consists of testing only, and many checks are done and some methods applied only in the late testing phase. Many types of static review are almost forgotten even though they reveal faults that are hard to detect by other means. Other forgotten areas are knowledge of bugs, awareness of continuously repeated bugs, and lightweight means to increase reliability. Compatibility between studies is not always good, which also makes documents harder to understand. Some means, methods, and problems are considered method- or domain-specific when they are not. The field lacks cross-field research.
Abstract:
The purpose of this thesis is twofold. The first and major part is devoted to sensitivity analysis of various discrete optimization problems, while the second part addresses methods for calculating measures of solution stability and for solving multicriteria discrete optimization problems. Despite numerous approaches to stability analysis of discrete optimization problems, two major directions can be singled out: quantitative and qualitative. Qualitative sensitivity analysis is conducted for multicriteria discrete optimization problems with minisum, minimax and minimin partial criteria. The main results obtained here are necessary and sufficient conditions for different stability types of optimal solutions (or of a set of optimal solutions) of the considered problems. Within the quantitative direction, various measures of solution stability are investigated. A formula for a quantitative characteristic called the stability radius is obtained for the generalized equilibrium situation invariant to changes of game parameters in the case of the Hölder metric. The quality of a problem solution can also be described in terms of robustness analysis. In this work the concepts of accuracy and robustness tolerances are presented for a strategic game with a finite number of players where the initial coefficients (costs) of the linear payoff functions are subject to perturbations. Investigation of the stability radius also aims to devise methods for its calculation. A new metaheuristic approach is derived for calculating the stability radius of an optimal solution to the shortest path problem. The main advantage of the developed method is that it is potentially applicable to calculating stability radii of NP-hard problems. The last chapter of the thesis focuses on deriving innovative methods, based on an interactive optimization approach, for solving multicriteria combinatorial optimization problems. The key idea of the proposed approach is to utilize a parameterized achievement scalarizing function for solution calculation and to direct the interactive procedure by changing the weighting coefficients of this function. To illustrate the introduced ideas, a decision making process is simulated for a three-objective median location problem. The concepts, models, and ideas collected and analyzed in this thesis create a good and relevant basis for developing more complicated and integrated models of postoptimal analysis and for solving the most computationally challenging problems related to it.
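For illustration, a common parameterized achievement scalarizing function of the kind referred to above is shown below in its standard weighted, augmented form, with weighting coefficients w, a reference point \bar z and a small augmentation coefficient \rho; this is not necessarily the exact variant used in the thesis.

```latex
% Weighted augmented achievement scalarizing function (assumed standard form)
s\bigl(f(x), \bar z, w\bigr)
  \;=\; \max_{i = 1,\dots,k} \, w_i \bigl(f_i(x) - \bar z_i\bigr)
  \;+\; \rho \sum_{i=1}^{k} w_i \bigl(f_i(x) - \bar z_i\bigr)
```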
Abstract:
Automation technologies are widely acclaimed to have the potential to significantly reduce energy consumption and energy-related costs in buildings. However, despite the abundance of commercially available technologies, automation in domestic environments keeps meeting with commercial failure. The main reason for this is the development process used to build the automation applications, which tends to focus more on technical aspects than on the needs and limitations of the users. An instance of this problem is the complex and poorly designed home automation front-ends that deter customers from investing in a home automation product. On the other hand, developing a usable and interactive interface is a complicated task for developers due to the multidisciplinary challenges that need to be identified and solved. In this context, the current research work investigates the different design problems associated with developing a home automation interface, as well as the existing design solutions applied to these problems. A qualitative data analysis approach was used for collecting data from research papers, and an open coding process was used to cluster the findings. From the analysis of the collected data, requirements for designing the interface were derived. A home energy management functionality for a Web-based home automation front-end was developed as a proof of concept, and a user evaluation was used to assess the usability of the interface. The results of the evaluation showed that this holistic approach to designing interfaces improved their usability, which increases the chances of commercial success.