43 results for cutting stock problem with setups
Abstract:
This thesis evaluates the usability of electronic healthcare services. The goal is to determine how the usability of a site like Hyvis.fi can be improved from the user's perspective. As practical work, a heuristic evaluation was performed on the Hyvis.fi site and on its competitors' solutions, in order to find out why the usability of the Hyvis.fi site is not as good as that of its competitors. The results showed that the most likely reason is a complicated and illogical dialogue with the user: the structure of the site is not as simple and consistent as in the other tested sites. In the future, usability testing must be carried out to determine which functions are used the most and the least. The most used functions should be made more prominent to users, and the least used ones hidden or removed entirely. In the case of the Hyvis.fi site, the poor dialogue appears to be such a serious problem that the other usability problems partly stem from it.
Abstract:
Vehicles such as buses generally use 24 VDC systems, and this will not change with electric vehicles. Electric vehicles therefore also need 24 VDC low-voltage batteries for lights, wipers, and other low-voltage systems. In addition, electric vehicles have loads such as air conditioning and an air compressor, which require a frequency converter to drive them. This thesis presents the design of a high-current printed circuit board for a DC/DC converter, which is part of a combined inverter and DC/DC converter unit designed for vehicle use. The main focus of the work is on PCB design, but the thesis also briefly describes the circuit and purpose of the whole device, as well as the selection, dimensioning, and cooling of the components on the power PCB. The dimensioning principles of high-current PCB design are reviewed, along with the aspects that particularly need to be taken into account. In addition, the PCB's connections and the temperature rise of the busbars due to current crowding are discussed. The designed PCB is measured and its operation is tested in a prototype device. With the prototype, the busbars are observed to heat up too much, and a problem is found in the circuit. The circuit was corrected and the operation analyzed again, after which the temperature rise of the PCB was observed to have dropped by 20 °C. As a result, the temperature rise of the PCB with the corrected circuit is as designed. Finally, replacing the PCB with a module solution is proposed to improve the device for series production.
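The abstract mentions the dimensioning principles of high-current PCB design without reproducing them. As background, here is a minimal sketch of one common starting point, the IPC-2221 chart fit relating current, temperature rise, and copper cross-section; the constants, copper thickness, and the 100 A / 20 °C example are illustrative assumptions, not values from the thesis:

```python
# Minimal sketch of high-current PCB trace sizing using the IPC-2221
# chart fit I = k * dT^0.44 * A^0.725 (I in A, dT in deg C, A in mil^2).
# Illustrates the kind of dimensioning the thesis refers to; not the
# actual design calculation used in the work.

def required_cross_section_mil2(current_a: float, delta_t_c: float,
                                external: bool = True) -> float:
    """Cross-sectional area (mil^2) needed to keep the temperature
    rise of a copper trace at delta_t_c for the given current."""
    k = 0.048 if external else 0.024  # IPC-2221 fit constants
    return (current_a / (k * delta_t_c ** 0.44)) ** (1 / 0.725)

def trace_width_mm(current_a: float, delta_t_c: float,
                   copper_um: float = 105.0, external: bool = True) -> float:
    """Trace width in mm for a given copper thickness in micrometres
    (105 um = 3 oz copper, a typical assumption for power boards)."""
    area_mil2 = required_cross_section_mil2(current_a, delta_t_c, external)
    thickness_mil = copper_um / 25.4          # 1 mil = 25.4 um
    width_mil = area_mil2 / thickness_mil
    return width_mil * 0.0254                 # mil -> mm

if __name__ == "__main__":
    # e.g. a 100 A bus with an allowed 20 deg C temperature rise
    print(f"{trace_width_mm(100.0, 20.0):.1f} mm")
```

The resulting widths quickly become impractically large at currents of this magnitude, which is consistent with the thesis's use of busbars and its closing suggestion of a module solution.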
Abstract:
This Master's thesis investigated experimental research methods for steam turbine exhaust ducts and carried out practical measurements on a scale model of the exhaust hood of the steam turbines at Fortum Oyj's Loviisa nuclear power plant. Based on a literature review, it was found that scale-model research has played a central role in the design of exhaust ducts. The fundamental problem of the experimental methods is reproducing the exhaust flow conditions of a steam turbine. The measurement methods used are mainly based on conventional pressure and velocity measurements; PIV (particle image velocimetry), based on tracer particles and laser illumination, was found to be a promising method in the field of exhaust duct research. In the practical part of the work, measurements were made on a 1:8 scale model of a steam turbine exhaust hood. The measurements examined the flow at the inlet and outlet planes of the model, and the static pressure distribution inside the hood was also measured. A Kiel probe measuring total pressure was found to be a practical tool for studying the flow field of the hood. The results clearly show the formation of vortices at the hood outlet and the uneven velocity distribution there. The static pressure inside the hood was found to be unevenly distributed. The results obtained from the outlet-plane and static pressure measurements agree well with results found in the literature and support the CFD simulations previously performed on the Loviisa exhaust duct.
Abstract:
The vast majority of our contemporary society owns a mobile phone, which has resulted in a dramatic rise in the number of networked computers in recent years. Security issues in computers have followed the same trend, and nearly everyone is now affected by them. How could the situation be improved? For software engineers, an obvious answer is to build software with security in mind. The problem is how to define secure software, or how to measure security. This thesis divides the problem into three research questions. First, how can we measure the security of software? Second, what types of tools are available for measuring security? And finally, what do these tools reveal about the security of software? Measuring tools of this kind are commonly called metrics. This thesis takes the perspective of software engineers in the software design phase. The focus on the design phase means that code-level semantics and programming-language specifics are not discussed in this work. Organizational policy, management issues, and the software development process are also out of scope. The first two research questions were studied using a literature review, while the third was studied using case study research. The target of the case study was a Java-based email server called Apache James, for which the changelog and security issues were documented and the source code was accessible. The research revealed that there is a consensus in the terminology of software security. Security verification activities are commonly divided into evaluation and assurance. The focus of this work was on assurance, which means verifying one's own work. There are 34 metrics available for security measurement, of which five are evaluation metrics and 29 are assurance metrics. We found, however, that the general quality of these metrics was not good. Only three metrics in the design category passed the inspection criteria and could be used in the case study. The metrics claim to give quantitative information on the security of the software, but in practice they were limited to comparing different versions of the same software. Apart from being relative, the metrics were unable to detect security issues or point out problems in the design. Furthermore, interpreting the metrics' results was difficult. In conclusion, the general state of software security metrics leaves a lot to be desired. The metrics studied had both theoretical and practical issues, and they are not suitable for daily engineering workflows. They nevertheless provide a basis for further research, since they point out the areas where security metrics must improve if security is to be verified from the design.
Abstract:
As unregistered grassroots charities do not appear in official statistics in China, they tend to remain unnoticed by scholars. Because they operate unofficially and avoid publicity, their work is also rarely reported by the media. In this research I explore the grassroots charity activity of one pop music fan club from the viewpoint of trust as a sociological concept, and I outline the general situation of charity in China. Using textual analysis of internet blogs and discussion forums, I map the charity project from the discussion of the original idea through the execution and follow-up phases. I study the roles the fan club members assume during the project, both as anonymous participants in internet conversations and as concrete, active charity volunteers outside the virtual world, and I identify the parties other than the fan club that were involved in the project. Interviews with one of the participants of the project in 2010, 2014, and 2015 brought valuable additional information and helped in distributing the questionnaire survey. A quantitative questionnaire survey was distributed among the fan club members to obtain more detailed information on motives and attitudes towards official and unofficial charity in China. Because of inequality in China, rural minority areas do not have the same educational opportunities as the mostly majority-inhabited urban areas, even though the country officially has nine-year compulsory education. Grassroots charities can operate in relative freedom, taking on some of the government's burden of social responsibility, as long as they do not criticize the authorities. The problem with grassroots charity seems to be a lack of sustainability. A lack of trust in the authorities and official charities was the reason why the Jane Zhang fan club decided to carry out a charity project unofficially. As a group of people previously unknown to each other, they managed to build the mutual trust needed to carry out the project transparently and successfully, though not sustainably. The internet has provided a new and effective platform for unofficial grassroots charities that choose not to co-operate with official organisations. At the grassroots level, charities can have the transparency and trust that official charities lack. I suggest that interviewing the real persons behind the internet aliases and finding out what happened outside the discussion forums would give a more detailed and outspoken description of the project, particularly concerning contacts with the local authorities. Travelling to the site and talking with the local people in the village would also show how they have experienced the project.
Abstract:
Price competition and changed agricultural regulations put pressure on the animal bedding market to develop. Availability problems and rising prices of the most common bedding materials, such as peat and wood-based bedding, also give other bedding materials an opportunity to succeed. Between 2009 and 2014, Ekovilla Oy competed in the livestock bedding market with a bedding product made from shredded paper. Cooperation with a marketing company forced the price too high, which had a negative effect on demand for the product; there was also a dust problem. The market is still believed to have demand for bedding made from recycled material if the price can be set appropriately. Ekovilla already has the production equipment, marketing channels, and logistics network for the bedding in place. This Master's thesis investigates how the company should return to the market and what should be taken into account. The study draws on both the internet and the experiences of previous customers. The customer interviews reveal interest in the eco stall bedding, especially if the price of the product can be lowered. Previously, customers had ordered Ekovilla's bedding mainly in situations where other bedding was not available. For some of the respondents, dustiness was such a big problem that the product could not be used, but positive comments were also made about usability and especially absorbency. Opportunities to return to the market exist, but marketing will be challenging. One development area for Ekovilla is offering the bedding to the pet market. This would require investment in further product development as well as in equipment. The pet market differs considerably from farms in terms of, among other things, package sizes, distribution channels, and pricing.
Abstract:
One challenge in data assimilation (DA) methods is how to compute the error covariance of the model state. Ensemble methods have been proposed for producing error covariance estimates, as the error is propagated in time by the non-linear model. Variational methods, on the other hand, use concepts from control theory, whereby the state estimate is optimized using both the background and the measurements. Numerical optimization schemes are applied, which solve the problems of memory storage and huge matrix inversions faced by classical Kalman filter methods. The Variational Ensemble Kalman Filter (VEnKF), a method inspired by the Variational Kalman Filter (VKF), enjoys the benefits of both ensemble methods and variational methods. It avoids the filter inbreeding problems that emerge when the ensemble spread underestimates the true error covariance; in VEnKF this is tackled by resampling the ensemble every time measurements are available. An advantage of VEnKF over VKF is that it needs neither tangent linear code nor adjoint code. In this thesis, VEnKF has been applied to a two-dimensional shallow water model simulating a dam-break experiment. The model is public code, with water height measurements recorded at seven stations along the mid-line of the 21.2 m long, 1.4 m wide flume. Because the data were too sparse to assimilate against the model state vector of 30 171 elements, we chose to interpolate the data both in time and in space. The results of the assimilation were compared with those of a pure simulation. We found that the results produced by VEnKF were more realistic, without the numerical artifacts present in the pure simulation. Creating wrapper code for a model and a DA scheme can be challenging, especially when the two were designed independently or are poorly documented. In this thesis we present a non-intrusive approach to coupling the model and a DA scheme: an external program sends and receives information between the model and the DA procedure using files. The advantage of this method is that the changes needed in the model code are minimal, only a few lines to handle input and output. Apart from the simplicity of the coupling, the approach can be employed even if the two were written in different programming languages, because the communication does not go through code. The non-intrusive approach accommodates parallel computing simply by telling the control program to wait until all processes have finished before the DA procedure is invoked. The overhead this introduces is worth noting, as at every assimilation cycle both the model and the DA procedure have to be re-initialized. Nonetheless, the method can be an ideal approach for a benchmark platform for testing DA methods. The non-intrusive VEnKF was applied to the multi-purpose hydrodynamic model COHERENS to assimilate Total Suspended Matter (TSM) in lake Säkylän Pyhäjärvi. The lake has an area of 154 km² and an average depth of 5.4 m. Turbidity and chlorophyll-a concentrations from MERIS satellite images were available for seven days between May 16 and July 6, 2009, and the effect of the organic matter was computationally eliminated to obtain TSM data. Because of the computational demands of both COHERENS and VEnKF, we chose a 1 km grid resolution. The VEnKF results were compared with measurements recorded at an automatic station located in the north-western part of the lake; however, because the TSM data were sparse in both time and space, the two could not be well matched.
The use of multiple automatic stations with real-time data is important to avoid the time-sparsity problem; with DA, this will help, for instance, in better understanding environmental hazard variables. We also found that using a very large ensemble does not necessarily improve the results, because beyond a certain size additional ensemble members add very little to the performance. The successful implementation of the non-intrusive VEnKF and the ensemble size limit point towards the emerging area of Reduced Order Modeling (ROM). To save computational resources, ROM avoids running the full-blown model. Applying ROM with the non-intrusive DA approach might yield a cheaper algorithm that relaxes the computational challenges existing in the field of modelling and DA.
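The thesis's wrapper code is not reproduced in the abstract; the following is a minimal sketch of the file-based control loop it describes, with hypothetical file names, a placeholder model command, and a toy stochastic EnKF update standing in for the actual VEnKF step:

```python
# Minimal sketch of the non-intrusive, file-based coupling described
# above: an external control program alternates between model runs and
# a DA step, communicating only through files. All file names and the
# model command ("./model") are hypothetical placeholders.
import subprocess
import numpy as np

def run_model(member: int) -> None:
    # Each ensemble member reads state_in_<member>.npy and writes
    # state_out_<member>.npy; the model itself is unmodified apart
    # from the few I/O lines that read and write these files.
    subprocess.run(["./model", f"state_in_{member}.npy",
                    f"state_out_{member}.npy"], check=True)

def da_update(ensemble: np.ndarray, obs: np.ndarray,
              obs_std: float = 0.05) -> np.ndarray:
    """Toy stochastic EnKF update observing the first len(obs) state
    components (a stand-in for the VEnKF analysis step)."""
    n, k = ensemble.shape
    H = np.eye(len(obs), k)                     # observation operator
    X = ensemble - ensemble.mean(axis=0)
    P = X.T @ X / (n - 1)                       # ensemble covariance
    R = obs_std**2 * np.eye(len(obs))
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
    perturbed = obs + obs_std * np.random.randn(n, len(obs))
    return ensemble + (perturbed - ensemble @ H.T) @ K.T

def assimilate(cycle: int, n_members: int) -> None:
    ensemble = np.stack([np.load(f"state_out_{m}.npy")
                         for m in range(n_members)])
    obs = np.load(f"obs_{cycle}.npy")           # interpolated observations
    analysis = da_update(ensemble, obs)
    for m in range(n_members):
        np.save(f"state_in_{m}.npy", analysis[m])

n_members, n_cycles = 50, 7
for cycle in range(n_cycles):
    # The members are independent, so they could be launched in
    # parallel; the controller waits until all have finished.
    for m in range(n_members):
        run_model(m)
    assimilate(cycle, n_members)
```

Because the controller only moves files, the same loop works regardless of the languages the model and the DA procedure are written in, at the cost of re-initializing both at every cycle, exactly the overhead the abstract mentions.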
Abstract:
The goal of Vehicle Routing Problems (VRP) and their variations is to transport a set of orders with the minimum number of vehicles at least cost. Most approaches are designed to solve specific problem variations independently, whereas in real-world applications different constraints must be handled concurrently. This research extends solutions obtained for the traveling salesman problem with time windows to a much wider class of route planning problems in logistics. The work describes a novel approach that supports a heterogeneous fleet of vehicles, dynamically reduces the number of vehicles, respects individual capacity restrictions, satisfies pickup and delivery constraints, and takes Hamiltonian paths (rather than cycles). The proposed approach uses Monte-Carlo Tree Search, and in particular Nested Rollout Policy Adaptation. For the evaluation of the work, real data from industry was obtained and tested, and the results are reported.
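As background, Nested Rollout Policy Adaptation learns a playout policy by recursively reinforcing the best sequence found at each nesting level. Here is a minimal sketch on a toy Hamiltonian path problem, not the industrial implementation evaluated in the thesis; the move encoding, cost function, and parameters are illustrative:

```python
# Minimal sketch of Nested Rollout Policy Adaptation (NRPA) for a
# generic sequencing problem, such as ordering the stops of a route.
import math
import random

def playout(policy, n_stops, cost):
    """Sample a stop order softmax-randomly from the policy."""
    remaining, seq = list(range(n_stops)), []
    while remaining:
        weights = [math.exp(policy.get((len(seq), m), 0.0)) for m in remaining]
        seq.append(random.choices(remaining, weights=weights)[0])
        remaining.remove(seq[-1])
    return -cost(seq), seq          # higher score = cheaper route

def adapt(policy, seq, n_stops, alpha=1.0):
    """Shift the policy towards the best sequence found so far."""
    policy = dict(policy)           # sub-levels keep their own copy
    remaining = list(range(n_stops))
    for step, move in enumerate(seq):
        z = sum(math.exp(policy.get((step, m), 0.0)) for m in remaining)
        for m in remaining:
            g = alpha * math.exp(policy.get((step, m), 0.0)) / z
            policy[(step, m)] = policy.get((step, m), 0.0) - g
        policy[(step, move)] = policy.get((step, move), 0.0) + alpha
        remaining.remove(move)
    return policy

def nrpa(level, policy, n_stops, cost, iters=20):
    if level == 0:
        return playout(policy, n_stops, cost)
    best_score, best_seq = -math.inf, None
    for _ in range(iters):
        score, seq = nrpa(level - 1, policy, n_stops, cost, iters)
        if score >= best_score:
            best_score, best_seq = score, seq
        policy = adapt(policy, best_seq, n_stops)
    return best_score, best_seq

if __name__ == "__main__":
    random.seed(0)
    pts = [(random.random(), random.random()) for _ in range(12)]
    # Hamiltonian path: sum of consecutive legs, no return to start.
    cost = lambda seq: sum(math.dist(pts[a], pts[b])
                           for a, b in zip(seq, seq[1:]))
    score, seq = nrpa(level=2, policy={}, n_stops=12, cost=cost)
    print(-score, seq)
```

Extending such a sketch towards the thesis's setting would mean encoding vehicle assignments, capacities, and pickup-and-delivery pairs into the moves and penalizing violations in the cost function.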
Abstract:
Thin disk and fiber lasers are new solid-state laser technologies that offer a combination of high beam quality and a wavelength that is easily absorbed by metal surfaces, and they are expected to challenge CO2 and Nd:YAG lasers in the cutting of metals of thick sections (thickness greater than 2 mm). This thesis studied the potential of disk and fiber lasers for cutting applications and the benefits of their better beam quality. The literature review covered the principles of the disk laser, the high power fiber laser, the CO2 laser and the Nd:YAG laser, as well as the principle of laser cutting. The cutting experiments were made with the disk, fiber and CO2 lasers using nitrogen as an assist gas. The test material was austenitic stainless steel, with sheet thicknesses of 1.3 mm, 2.3 mm, 4.3 mm and 6.2 mm for the disk and fiber laser cutting experiments, and 1.3 mm, 1.85 mm, 4.4 mm and 6.4 mm for the CO2 laser cutting experiments. The experiments focused on the maximum cutting speeds with appropriate cut quality. Kerf width, cut edge perpendicularity and surface roughness were the characteristics used to analyze the cut quality. Attempts were made to draw conclusions on the influence of high beam quality on the cutting speed and cut quality. For the disk and fiber laser experiments with the 1.3 mm and 2.3 mm sheets, the cutting speeds were very high and the cut quality was good. The disk and fiber laser cutting speeds were lower at 4.3 mm and 6.2 mm sheet thickness, but there was still a considerable percentage increase in cutting speed compared to the CO2 laser at similar sheet thicknesses. However, the cut quality at 6.2 mm thickness was not very good for the disk and fiber laser experiments, though it could probably be improved by proper selection of cutting parameters.
Abstract:
This thesis discusses the basic problem of modern portfolio theory: how to find the optimal allocation for an investment portfolio. The theory provides a solution in the form of an efficient portfolio, which minimises the risk of the portfolio with respect to the expected return. A central feature of all portfolios on the efficient frontier is that the investor needs to provide the expected return for each asset. Market anomalies are persistent patterns seen in the financial markets which cannot be explained by current asset pricing theory. The goal of this thesis is to study whether these anomalies can be observed among different asset classes and, if persistent patterns are found, whether the anomalies hold valuable information for determining the expected returns used in the portfolio optimization. Market anomalies and investment strategies based on them are studied with a rolling estimation window, where the return for the following period is always based on historical information; this is also crucial when rebalancing the portfolio. The anomalies investigated in this thesis are value, momentum, reversal, and idiosyncratic volatility. The research data include price series of country-level stock indices, government bonds, currencies, and commodities. Modern portfolio theory and the views given by the anomalies are combined by utilising the Black-Litterman model, which makes it possible to optimise the portfolio so that the investor's views are taken into account. When constructing the portfolios, the goal is to maximise the Sharpe ratio. The significance of the results is studied by assessing whether each strategy yields excess returns relative to those explained by the three-factor model. The most outstanding finding is that anomaly-based factors contain valuable information for enhancing efficient portfolio diversification. When the highest Sharpe ratios for each asset class are picked from the test factors and applied to the Black-Litterman model, the final portfolio achieves a superior risk-return combination. The highest Sharpe ratios are provided by the momentum strategy for stocks and by long-term reversal for the other asset classes. A strategy based on the value effect was also highly appealing, performing essentially as well as the aforementioned Sharpe strategy. When studying the anomalies, it is found that 12-month momentum is the strongest effect, especially for stock indices. In addition, high idiosyncratic volatility seems to be positively correlated with returns on country-level stock indices.
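As a sketch of how anomaly views can be blended into the optimization, the standard Black-Litterman posterior combines equilibrium returns with view portfolios; the asset universe, covariance, and view numbers below are illustrative placeholders, not the data or estimates used in the thesis:

```python
# Minimal sketch of a Black-Litterman update followed by a maximum
# Sharpe ratio (tangency) portfolio. Toy numbers throughout.
import numpy as np

def black_litterman(Sigma, pi, P, Q, Omega, tau=0.05):
    """Posterior expected returns:
    mu = [(tau*Sigma)^-1 + P' Omega^-1 P]^-1 [(tau*Sigma)^-1 pi + P' Omega^-1 Q]
    """
    A = np.linalg.inv(tau * Sigma)
    Omega_inv = np.linalg.inv(Omega)
    return np.linalg.solve(A + P.T @ Omega_inv @ P,
                           A @ pi + P.T @ Omega_inv @ Q)

def max_sharpe_weights(mu, Sigma, rf=0.0):
    """Unconstrained tangency portfolio: w proportional to Sigma^-1 (mu - rf)."""
    w = np.linalg.solve(Sigma, mu - rf)
    return w / w.sum()

# Three assets: stocks, bonds, commodities (illustrative inputs).
Sigma = np.array([[0.040, 0.006, 0.004],
                  [0.006, 0.010, 0.002],
                  [0.004, 0.002, 0.020]])
pi = np.array([0.05, 0.02, 0.03])        # equilibrium expected returns
# One anomaly view, e.g. momentum: stocks outperform bonds by 3 %.
P = np.array([[1.0, -1.0, 0.0]])
Q = np.array([0.03])
Omega = np.array([[0.001]])              # confidence in the view

mu = black_litterman(Sigma, pi, P, Q, Omega)
print("posterior returns:", mu)
print("tangency weights:", max_sharpe_weights(mu, Sigma))
```

In a rolling-window study of the kind the abstract describes, pi, Sigma, and the views P, Q, Omega would be re-estimated from historical data at each rebalancing date.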
Abstract:
Thermal cutting methods are commonly used in the manufacture of metal parts. Thermal cutting processes separate materials using heat, with or without a stream of cutting oxygen; common processes are oxygen, plasma and laser cutting. Which cutting method is used depends on the application and the material. Numerically controlled thermal cutting is a cost-effective way of prefabricating components, and one design aim is to minimize the number of work steps in order to increase competitiveness. As a result, the holes and openings in plate parts manufactured today are made using thermal cutting methods. This is a problem from the fatigue life perspective, because there is a local detail in the as-welded state that causes a rise in stress in a local area of the plate. In a case where the static utilization of the net section is fully used, the calculated linear local stresses and stress ranges are often over twice the material yield strength, and the shakedown criteria are exceeded. Fatigue life assessment of flame-cut details is commonly based on the nominal stress method. For welded details, design standards and instructions provide more accurate and flexible methods, e.g. the hot-spot method, but these methods are not universally applied to flame-cut edges. Some of the laboratory fatigue tests of flame-cut edges indicated that fatigue life estimates based on the standard nominal stress method can be quite conservative in cases where a high notch factor is present. This is an undesirable phenomenon, and it limits the potential for minimizing structure size and total costs. A new calculation method is introduced to improve the accuracy of theoretical fatigue life prediction for a flame-cut edge with a high stress concentration factor. Simple equations were derived using the laboratory fatigue test results published in this work. The proposed method, called the modified FAT method (FATmod), takes into account the residual stress state, surface quality, material strength class and the true stress ratio at the critical location.
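The FATmod equations themselves are not given in the abstract. As background, the standard nominal stress method that FATmod refines predicts life from an S-N curve anchored at a FAT class; here is a minimal sketch assuming the common IIW form with slope m = 3, where the FAT value and stress range are illustrative only:

```python
# Minimal sketch of the standard nominal stress method: fatigue life
# from an IIW-style S-N curve, where the FAT class is the stress range
# (MPa) corresponding to 2e6 cycles and m = 3 is the assumed slope.
# Illustrative numbers; not the thesis's FATmod equations.
def fatigue_life_cycles(stress_range_mpa: float, fat_class: float,
                        m: float = 3.0) -> float:
    """Predicted cycles to failure for a constant-amplitude nominal
    stress range: N = 2e6 * (FAT / delta_sigma)^m."""
    return 2.0e6 * (fat_class / stress_range_mpa) ** m

# e.g. a cut edge assessed with an assumed FAT 125 under a 200 MPa range:
print(f"{fatigue_life_cycles(200.0, 125.0):.3g} cycles")
```

The abstract's point is that this single FAT value ignores residual stress, surface quality, material strength class, and stress ratio; FATmod adjusts the assessment for exactly those quantities.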