895 results for cost model
Abstract:
The main objective of this Master's thesis is to develop a cost allocation model for a leading food industry company in Finland. The goal is to develop an allocation method for the fixed overhead expenses incurred in a specific production unit and to create a workable tracking system for product costs. The second objective is to construct the allocation model so that it can be adapted to other units as well. Costs, activities, drivers and appropriate allocation methods are studied. The thesis begins with a literature review of existing activity-based costing (ABC) theory, an inspection of cost information, and interviews with company officials to obtain a general view of the requirements for the model to be constructed. Familiarization with the company began with its existing cost accounting methods. The main proposals for a new allocation model emerged from the interviews and were used to set targets for developing the new allocation method. As a result of this thesis, an Excel-based model is created from the theoretical and empirical data. The new system handles overhead costs in more detail, improving cost awareness and the transparency of cost allocations, and sharpening the products' cost structure. The improved cost awareness is achieved by selecting the cost drivers best suited to this situation. Capacity changes are also taken into consideration; for example, the use of practical or normal capacity instead of theoretical capacity is recommended. Recommendations for further development are also made concerning capacity handling and cost collection.
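The abstract does not reproduce the Excel model itself; the sketch below only illustrates the general ABC mechanics it describes, with hypothetical activities, cost-driver rates and a practical-capacity denominator (all figures are invented for illustration).

```python
# Minimal activity-based costing sketch (hypothetical figures, not the thesis model).
# Fixed overhead of each activity pool is turned into a driver rate using
# practical capacity, then charged to products by their driver consumption.

activity_pools = {
    # activity: (fixed overhead per period in EUR, practical capacity in driver units)
    "machine_setup":   (120_000, 800),      # setups
    "quality_control": (60_000, 4_000),     # inspection hours
    "packaging":       (90_000, 150_000),   # packages
}

products = {
    # product: driver units consumed per period
    "product_A": {"machine_setup": 300, "quality_control": 1_500, "packaging": 60_000},
    "product_B": {"machine_setup": 200, "quality_control": 1_000, "packaging": 40_000},
}

driver_rates = {
    activity: overhead / capacity
    for activity, (overhead, capacity) in activity_pools.items()
}

for name, usage in products.items():
    allocated = sum(driver_rates[a] * units for a, units in usage.items())
    print(f"{name}: allocated overhead {allocated:,.0f} EUR")

# With a practical-capacity denominator, the cost of unused capacity stays visible
# instead of being pushed onto products:
for activity, (overhead, capacity) in activity_pools.items():
    used = sum(p[activity] for p in products.values())
    idle_cost = driver_rates[activity] * (capacity - used)
    print(f"{activity}: unused capacity cost {idle_cost:,.0f} EUR")
```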
Abstract:
Animal models of intervertebral disc degeneration play an important role in clarifying pathophysiological mechanisms and testing novel therapeutic strategies. The objective of the present study is to describe a simple animal model of disc degeneration in Wistar rats for use in research studies. Disc degeneration was confirmed and classified by radiography, magnetic resonance imaging and histological evaluation. Adult male Wistar rats were anesthetized and submitted to percutaneous disc puncture with a 20-gauge needle at levels 6-7 and 8-9 of the coccygeal vertebrae. The needle was inserted into the discs under fluoroscopic guidance, its tip positioned across the nucleus pulposus up to the contralateral annulus fibrosus, rotated 360° twice, and held for 30 s. To grade the severity of intervertebral disc degeneration, we measured intervertebral disc height on radiographic images 7 and 30 days after the injury, together with signal intensity on T2-weighted magnetic resonance imaging. Histological analysis was performed with hematoxylin-eosin staining, and collagen fiber orientation was assessed using picrosirius red staining and polarized light microscopy. Imaging and histological score analyses revealed significant disc degeneration both 7 and 30 days after the lesion, without deaths or systemic complications. Interobserver histological evaluation showed significant agreement. There was a significant positive correlation between histological score and intervertebral disc height 7 and 30 days after the lesion. We conclude that the tail disc puncture method in Wistar rats is a simple, cost-effective and reproducible model for inducing disc degeneration.
Abstract:
Software is a key component of many of the devices and products we use every day. Most customers demand not only that their devices function as expected but also that the software be of high quality: reliable, fault tolerant, efficient, and so on. In short, it is not enough that a calculator gives the correct result of a calculation; we want the result instantly, in the right form, with minimal use of battery, and so on. One of the keys to succeeding in today's industry is delivering high quality. In most software development projects, high-quality software is achieved by rigorous testing and good quality assurance practices. However, customers are now asking for these high-quality software products at an ever-increasing pace, leaving companies less time for development. Software testing is an expensive activity because it requires much manual work. Testing, debugging, and verification are estimated to consume 50 to 75 per cent of the total development cost of complex software projects. Furthermore, the most expensive software defects are those that have to be fixed after the product is released. One of the main challenges in software development is reducing the cost and time of software testing without sacrificing the quality of the developed software. It is often not enough to demonstrate only that a piece of software functions correctly. Usually, many other aspects of the software, such as performance, security, scalability and usability, also need to be verified. Testing these aspects of the software is traditionally referred to as non-functional testing. One of the major challenges of non-functional testing is that it is usually carried out at the end of the software development process, when most of the functionality is implemented, because non-functional aspects such as performance or security apply to the software as a whole. In this thesis, we study the use of model-based testing. We present approaches to automatically generate tests from behavioral models to address some of these challenges, and we show that model-based testing is applicable not only to functional testing but also to non-functional testing. In its simplest form, performance testing is performed by executing multiple test sequences at once while observing the software in terms of responsiveness and stability rather than output. The main contribution of the thesis is a coherent model-based testing approach for testing functional and performance-related issues in software systems. We show how we go from system models, expressed in the Unified Modeling Language, to test cases and back to models again. The system requirements are traced throughout the entire testing process; requirements traceability facilitates finding faults in the design and implementation of the software. In the research field of model-based testing, many newly proposed approaches suffer from poor or lacking tool support. Therefore, the second contribution of this thesis is proper tool support for the proposed approach, integrated with leading industry tools. We offer stand-alone tools, tools integrated with other industry-leading tools, and complete tool chains where necessary. Many model-based testing approaches proposed by the research community also suffer from poor empirical validation in an industrial context. To demonstrate the applicability of our proposed approach, we apply our research to several systems, including industrial ones.
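The thesis itself works with UML models and industrial tool chains; the sketch below only illustrates the underlying idea of deriving tests from a behavioral model, using a hypothetical finite-state-machine specification and a trivially correct demo system under test.

```python
# Minimal model-based test generation sketch (illustrative only, not the thesis tooling).
# A behavioral model is given as a finite state machine; tests are generated by
# covering every transition and replaying the input sequences against the SUT.
from collections import deque

# (state, input) -> (next_state, expected_output)  -- hypothetical model
MODEL = {
    ("idle", "coin"):   ("paid", "accepted"),
    ("paid", "button"): ("idle", "dispense"),
    ("paid", "cancel"): ("idle", "refund"),
    ("idle", "button"): ("idle", "ignore"),
}

def transition_cover(model, start="idle"):
    """Return one input sequence per transition, each reachable from the start state."""
    prefix = {start: []}          # shortest known input prefix reaching each state
    queue = deque([start])
    while queue:
        s = queue.popleft()
        for (src, inp), (dst, _) in model.items():
            if src == s and dst not in prefix:
                prefix[dst] = prefix[s] + [inp]
                queue.append(dst)
    return [prefix[src] + [inp] for (src, inp) in model if src in prefix]

def run_test(sut, model, inputs, start="idle"):
    """Replay an input sequence and compare SUT outputs against the model oracle."""
    state = start
    for inp in inputs:
        state, expected = model[(state, inp)]
        assert sut(inp) == expected, f"fault on input {inp!r}"

class DemoSut:
    """A stand-in SUT that simply follows the model, so every generated test passes."""
    def __init__(self):
        self.state = "idle"
    def __call__(self, inp):
        self.state, out = MODEL[(self.state, inp)]
        return out

for seq in transition_cover(MODEL):
    run_test(DemoSut(), MODEL, seq)
    print("passed:", seq)
```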
Abstract:
The aging of alcoholic beverages is generally carried out in wooden barrels made from Quercus sp. species. Owing to the high cost and the lack of commercially viable production of these trees in Brazil, there is demand for alternatives, such as the use of other, native species and the incorporation of new technologies that would make sugar cane spirit aged in Brazilian wood more competitive. The drying of the wood, the thermal treatment applied to it, and the manufacturing techniques are important tools in defining the sensory quality of alcoholic beverages once they are placed in contact with the barrels. During thermal treatment, several compounds are altered by the application of heat to the wood; various studies show that, compared with untreated wood, the compounds are modified, different aromas develop, the color changes, and the beverages acquire an even more pleasant taste. This study evaluated whether there are significant differences in aroma between hydro-alcoholic sugar cane spirit solutions prepared with different species of heat-treated and untreated wood. An acceptance test was applied to determine which solutions the tasters preferred under the specific test conditions.
Abstract:
This study is based on a large survey of over 1,500 Finnish companies' usage of, needs for, and difficulties in implementing management accounting systems. The study uses quantitative, qualitative and mixed methods to answer the research questions. The empirical data were gathered through structured interviews with randomly selected companies of varying sizes and industries. The study answers its three research questions by analyzing the characteristics and behaviors of companies operating in Finland. The study found five distinct groups of companies according to the characteristics of their cost information and management accounting system use. It also showed that the state of cost information and management accounting systems depends on the industry and size of the company. Over 50% of the companies either did not know how their systems could be updated or regarded their systems as inadequate. The qualitative side also highlighted the need for tailored and integrated management accounting systems to create more value for company managers. The major inhibitors of new system implementation were the lack of both monetary and human resources. Using mixed methods and design science, a new and improved sophistication model is created, combining the empirical results with previous literature. The sophistication model shows the different stages of management accounting systems in use and what companies can achieve by implementing and upgrading their systems.
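The five groups reported above come from the authors' own statistical analysis; purely as an illustration of the kind of grouping involved, the sketch below clusters synthetic survey responses on cost-information characteristics with k-means (the features and figures are invented).

```python
# Illustrative clustering of survey responses (synthetic data, not the study's dataset).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Hypothetical features per company: [employees, share of overhead traced to products,
# number of reporting dimensions, reports produced per year]
X = rng.normal(
    loc=[120, 0.4, 3, 12],
    scale=[80, 0.2, 2, 6],
    size=(1500, 4),
)

X_scaled = StandardScaler().fit_transform(X)
labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(X_scaled)

for k in range(5):
    members = X[labels == k]
    print(f"group {k}: {len(members)} companies, "
          f"mean size {members[:, 0].mean():.0f} employees")
```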
Abstract:
In the industry of the case company, transportation and warehousing costs account for more than 10% of total cost, which is above average. The case company, based in Finland, believes that by sending larger shipments as parcels it could save tens of thousands of euros annually in freight costs on Finland's domestic shipments. To achieve these savings and optimize total logistics cost, the company wants to determine the more cost-efficient way of shipping road shipments of given volumes, in parcel boxes or on pallets, and the split volume at which the shipment type should change. Distribution center (DC) costs affect this decision and therefore also need to be evaluated to determine the total logistics cost savings. The main results were obtained through activity-based costing calculations covering DC and road freight costs, used to determine the ideal split volume at which total logistics cost is optimal. The calculations were done for Finland's DC, separately for the two main road freight destinations, Finland and Sweden, which cover 50% of road shipment spend. Data for the calculations were collected both manually and automatically from various internal and external sources, such as the company ERP system and logistics service providers' (LSP) reporting. DC processes were studied in practice and compared to model processes. Currently used freight rates were compared to existing pricing models, and the freight service tendering process was evaluated by participating in it and comparing it to models from the literature. The results show that the potential savings are not as significant as the company hoped, mainly because packing work increases DC labor cost. Annual savings from setting the ideal split volume per country would amount to 0.4% of the warehousing and transportation costs of the shipments in the scope of this thesis. The split volume should be set separately for each route, mainly because the road freight pricing model differs by country: on some routes bigger parcels should be sent, while on others pallets should be used more. The next step is to perform these calculations for the remaining routes to determine the total savings potential. Other findings show that the DC processes are well designed and that the company could achieve savings by executing tenders more efficiently. The company should also pay more attention to parcel pricing and pack shipments accordingly.
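The actual tariffs and handling times behind the thesis calculations are confidential; the sketch below only shows the shape of the parcel-versus-pallet comparison, with hypothetical freight rates and DC handling costs, and scans shipment volumes to find the break-even split volume.

```python
# Hypothetical parcel-vs-pallet cost comparison (illustrative rates, not the case data).
import math

def parcel_cost(volume_m3, parcel_size_m3=0.05, price_per_parcel=6.0,
                packing_min_per_parcel=2.0, labor_eur_per_min=0.5):
    """Freight plus DC packing cost when the shipment is split into parcel boxes."""
    parcels = math.ceil(volume_m3 / parcel_size_m3)
    return parcels * (price_per_parcel + packing_min_per_parcel * labor_eur_per_min)

def pallet_cost(volume_m3, pallet_size_m3=1.0, price_per_pallet=45.0,
                handling_min_per_pallet=5.0, labor_eur_per_min=0.5):
    """Freight plus DC handling cost when the shipment is sent on pallets."""
    pallets = math.ceil(volume_m3 / pallet_size_m3)
    return pallets * (price_per_pallet + handling_min_per_pallet * labor_eur_per_min)

# Scan volumes to locate the split volume at which pallets become cheaper than parcels.
volume = 0.05
while volume <= 2.0:
    if pallet_cost(volume) <= parcel_cost(volume):
        print(f"break-even split volume ~ {volume:.2f} m3 for this route")
        break
    volume += 0.05
```

With these made-up rates the break-even lies around 0.35 m3; in practice the split would be recomputed per route because each country's freight pricing model differs, as the abstract notes.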
Abstract:
The main aim of this research was to develop a cost of poor quality calculation model that better reflects the business impact of lost productivity caused by IT incidents for the case company. This objective was pursued by reviewing the literature and conducting a study in a Finnish multinational manufacturing company. A broad analysis of the scientific literature identified the main theories and models of cost of poor quality and provided a better basis for developing measurements of the business impact of lost productivity. Empirical data were gathered through semi-structured interviews and an internet-based survey; in total, twelve interviews with experts and 39 survey responses from business stakeholders were collected. The results of the empirical study helped develop the cost of poor quality measurement model, which was tied to the incident priority matrix. The model was, however, built on the data that were available. The main conclusion of the thesis is that the cost of poor quality measurements could be improved further if additional data points could be used. The new model takes different cost regions into consideration and builds on this notion.
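The thesis model itself is built on company-specific data; the sketch below only illustrates how lost-productivity cost can be tied to an incident priority matrix, using hypothetical user counts, downtimes and hourly rates.

```python
# Hypothetical cost-of-poor-quality estimate from IT incidents (illustrative figures).

# Priority matrix: priority -> (affected users, productivity loss factor 0..1)
PRIORITY_IMPACT = {
    "P1": (500, 0.9),   # critical, near-total outage
    "P2": (100, 0.5),
    "P3": (20, 0.2),
    "P4": (5, 0.1),
}

AVG_HOURLY_COST_EUR = 45.0  # assumed loaded labor cost per affected user

def incident_cost(priority: str, downtime_hours: float) -> float:
    """Lost-productivity cost of one incident under the assumed priority matrix."""
    users, loss_factor = PRIORITY_IMPACT[priority]
    return users * loss_factor * downtime_hours * AVG_HOURLY_COST_EUR

# Example month of incidents: (priority, downtime in hours)
incidents = [("P1", 2.0), ("P2", 4.0), ("P3", 8.0), ("P3", 1.5), ("P4", 12.0)]
total = sum(incident_cost(p, h) for p, h in incidents)
print(f"estimated cost of poor quality: {total:,.0f} EUR")
```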
Abstract:
The purpose of this study is to examine the impact of the choice of cut-off points, sampling procedures, and the business cycle on the accuracy of bankruptcy prediction models. Misclassification can result in erroneous predictions leading to prohibitive costs to firms, investors and the economy. To test the impact of the choice of cut-off points and sampling procedures, three bankruptcy prediction models are assessed: Bayesian, Hazard and Mixed Logit. A salient feature of the study is that the analysis includes both parametric and nonparametric bankruptcy prediction models. A sample of firms from the Lynn M. LoPucki Bankruptcy Research Database in the U.S. was used to evaluate the relative performance of the three models. The choice of cut-off point and sampling procedure was found to affect the rankings of the various models. In general, the results indicate that the empirical cut-off point estimated from the training sample resulted in the lowest misclassification costs for all three models. Although the Hazard and Mixed Logit models resulted in lower misclassification costs in the randomly selected samples, the Mixed Logit model did not perform as well across varying business cycles. In general, the Hazard model has the highest predictive power. However, the higher predictive power of the Bayesian model when the ratio of the cost of Type I errors to the cost of Type II errors is high is relatively consistent across all sampling methods. Such an advantage may make the Bayesian model more attractive in the current economic environment. This study extends recent research comparing the performance of bankruptcy prediction models by identifying the conditions under which a model performs better. It also addresses the concerns of a range of user groups, including auditors, shareholders, employees, suppliers, rating agencies, and creditors, with respect to assessing failure risk.
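As a small illustration of the cut-off issue discussed above (with made-up probabilities and cost ratios, not the study's data or models), the sketch below picks the empirical cut-off that minimizes expected misclassification cost on a training sample.

```python
# Choosing a classification cut-off by minimizing misclassification cost
# (synthetic example; here a Type I error means classifying a bankrupt firm as healthy).
import numpy as np

rng = np.random.default_rng(1)
# Predicted bankruptcy probabilities and true outcomes (1 = bankrupt) for a training sample.
p_hat = np.concatenate([rng.beta(2, 8, 900), rng.beta(6, 3, 100)])
y = np.concatenate([np.zeros(900), np.ones(100)])

COST_TYPE_I = 30.0   # missed bankruptcy (usually far more costly)
COST_TYPE_II = 1.0   # false alarm on a healthy firm

def expected_cost(cutoff):
    pred_bankrupt = p_hat >= cutoff
    type_i = np.sum((y == 1) & ~pred_bankrupt)   # missed bankruptcies
    type_ii = np.sum((y == 0) & pred_bankrupt)   # false alarms
    return COST_TYPE_I * type_i + COST_TYPE_II * type_ii

cutoffs = np.linspace(0.01, 0.99, 99)
best = min(cutoffs, key=expected_cost)
print(f"empirical cut-off: {best:.2f}, training-sample cost: {expected_cost(best):.0f}")
```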
Abstract:
We study the dynamics of a game-theoretic network formation model that yields large-scale small-world networks. So far, mostly stochastic frameworks have been used to explain the emergence of these networks. It is natural, however, to seek game-theoretic network formation models in which links are formed through the strategic behavior of individuals rather than by chance. Inspired by Even-Dar and Kearns (2007), we consider a more realistic model in which the cost of establishing each link is determined dynamically during the course of the game. Moreover, players are allowed to offer transfer payments for the formation of links, and they must pay a maintenance cost to sustain their direct links during the game. We show that the equilibrium networks of our model have a small diameter of at most 4. Unlike the earlier model, not only is the existence of equilibrium networks guaranteed in our model, but these networks also coincide with the outcomes of pairwise Nash equilibrium in network formation. Furthermore, we provide a network formation simulation that generates small-world networks. We also analyze the impact of placing players in a hierarchical structure by constructing a strategic model in which a complete b-ary tree is the seed network.
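The paper's own equilibrium analysis is more involved; the toy simulation below, with assumed cost parameters and a ring seed network, only conveys the flavor of strategic link formation with a dynamic link cost plus a maintenance cost, and reports the resulting diameter.

```python
# Toy simulation of strategic link formation (illustrative parameters, not the paper's model).
# Two players add a link only if each one's distance saving exceeds its current
# link-formation cost plus a per-round maintenance cost for the link.
import random
import networkx as nx

random.seed(0)
N = 40
G = nx.cycle_graph(N)        # simple seed network (a ring), not the paper's b-ary tree
MAINTENANCE = 0.5
ROUNDS = 2000

def distance_saving(G, u, v):
    """How much u's total distance to all others drops if the edge (u, v) is added."""
    before = sum(nx.single_source_shortest_path_length(G, u).values())
    H = G.copy()
    H.add_edge(u, v)
    after = sum(nx.single_source_shortest_path_length(H, u).values())
    return before - after

for _ in range(ROUNDS):
    u, v = random.sample(range(N), 2)
    if G.has_edge(u, v):
        continue
    # Dynamic link cost: forming links gets more expensive as a player gains neighbors.
    cost_u = 1.0 + 0.5 * G.degree(u) + MAINTENANCE
    cost_v = 1.0 + 0.5 * G.degree(v) + MAINTENANCE
    if distance_saving(G, u, v) > cost_u and distance_saving(G, v, u) > cost_v:
        G.add_edge(u, v)

print("edges:", G.number_of_edges(), "diameter:", nx.diameter(G))
```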
Abstract:
The purpose of the present study was to examine the role of the bystander in bullying situations. A cost/benefit model was explored to investigate the factors adolescents consider in deciding whether to intervene when witnessing bullying. Adolescents in the present study (N = 101, 50.5% female, aged 12 to 18, M = 15.37 years, SD = 1.71 years) completed self-report questionnaires and also responded to bullying scenarios, stating how the bystander would react and explaining potential personal costs and benefits. Adolescents were able to articulate various personal costs and benefits involved in the decision to intervene. The conclusions of the present study are that: 1) the evolutionary approach is quite informative in illuminating the decision process of the bystander, 2) adolescents' beliefs about bullying and the role of bystanders differ from their teachers', and 3) the rather explicit cost/benefit model could be used to develop more targeted anti-bullying programs.
Abstract:
We reconsider the discrete version of the axiomatic cost-sharing model. We propose a condition of (informational) coherence requiring that not all informational refinements of a given problem be solved differently from the original problem. We prove that strictly coherent linear cost-sharing rules must be simple random-order rules.
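For reference only (this formula is not quoted in the abstract), a random-order rule in a cost-sharing problem with agent set N and cost function C is commonly written as the expected incremental cost of an agent over a uniformly drawn ordering of the agents:

```latex
% Standard random-order (incremental) cost-sharing formula, given for reference.
\[
  \varphi_i(C) \;=\; \frac{1}{|N|!} \sum_{\pi \in \Pi(N)}
    \Big[ C\big(P_i^{\pi} \cup \{i\}\big) - C\big(P_i^{\pi}\big) \Big],
\]
% where \Pi(N) is the set of orderings of the agent set N and
% P_i^{\pi} is the set of agents preceding i in the ordering \pi.
```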
Abstract:
This Master's thesis presents a new unsupervised approach for detecting and segmenting urban regions in hyperspectral images. The proposed method requires three steps. First, in order to reduce the computational cost of our algorithm, a color image of the spectral content is estimated. To this end, a non-linear dimensionality reduction step, based on two complementary but conflicting criteria of good visualization, namely accuracy and contrast, is carried out to produce a color display of each hyperspectral image. Then, to discriminate urban regions from non-urban regions, the second step consists in extracting a few discriminant (and complementary) features from this color hyperspectral image. To this end, we extracted a set of discriminant parameters describing the characteristics of an urban area, which is mainly composed of manufactured objects with simple, geometric and regular shapes. We used textural features based on gray levels, gradient magnitude, or parameters derived from the co-occurrence matrix, combined with structural features based on the local orientation of the image gradient and the local detection of line segments. In order to further reduce the computational complexity of our approach and to avoid the "curse of dimensionality" problem that arises when clustering high-dimensional data, we decided, in the last step, to cluster each textural or structural feature individually with a simple K-means procedure and then to combine these coarse segmentations, obtained at low cost, with an efficient segmentation-map fusion model. The experiments reported here show that this strategy is visually effective and compares favorably with other methods for detecting and segmenting urban areas from hyperspectral images.
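The thesis relies on a dedicated segmentation-map fusion model; the sketch below, with randomly generated feature maps standing in for the real textural and structural features, only illustrates the last step's general idea: clustering each feature individually with K-means and then fusing the coarse label maps, here with a naive per-pixel vote.

```python
# Per-feature K-means followed by a naive per-pixel fusion (illustrative only;
# the thesis uses a more elaborate segmentation-map fusion model).
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
H, W = 64, 64
# Stand-ins for feature maps computed from the color rendering of a hyperspectral image,
# e.g. gradient magnitude, a co-occurrence statistic, local gradient orientation coherence.
feature_maps = [rng.random((H, W)) for _ in range(3)]

def cluster_feature(fmap, k=2):
    """Cluster one feature map into k coarse regions with K-means."""
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(
        fmap.reshape(-1, 1))
    return labels.reshape(H, W)

coarse_segmentations = [cluster_feature(f) for f in feature_maps]

# Naive fusion: a pixel is labeled "urban" if a majority of the coarse maps agree.
votes = np.sum(coarse_segmentations, axis=0)
fused = (votes >= 2).astype(np.uint8)
print("urban pixel fraction:", fused.mean())
```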
Abstract:
The purpose of this chapter is to provide an elementary introduction to the non-renewable resource model with multiple demand curves. The theoretical literature following Hotelling (1931) assumed that all energy needs are satisfied by one type of resource (e.g. ‘oil’), extractible at different per-unit costs. This formulation implicitly assumes that all users are the same distance from each resource pool, that all users are subject to the same regulations, and that motorist users can switch as easily from liquid fossil fuels to coal as electric utilities can. These assumptions imply, as Herfindahl (1967) showed, that in competitive equilibrium all users will exhaust a lower cost resource completely before beginning to extract a higher cost resource: simultaneous extraction of different grades of oil or of oil and coal should never occur. In trying to apply the single-demand curve model during the last twenty years, several teams of authors have independently found a need to generalize it to account for users differing in their (1) location, (2) regulatory environment, or (3) resource needs. Each research team found that Herfindahl's strong, unrealistic conclusion disappears in the generalized model; in its place, a weaker Herfindahl result emerges. Since each research team focussed on a different application, however, it has not always been clear that everyone has been describing the same generalized model. Our goal is to integrate the findings of these teams and to exposit the generalized model in a form which is easily accessible.
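As background not spelled out in the summary above, the single-resource Hotelling model rests on the arbitrage condition that the net price of an exhaustible resource rises at the rate of interest:

```latex
% One standard statement of the single-resource Hotelling arbitrage condition.
\[
  p(t) - c = \big(p(0) - c\big)\, e^{rt},
\]
% where p(t) is the resource price, c the constant per-unit extraction cost and
% r the interest rate; with two cost grades c_1 < c_2, this condition is what
% drives Herfindahl's result that the cheaper grade is exhausted first.
```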
Abstract:
Surgical treatment of abdominal aortic aneurysms is increasingly being replaced by endovascular aneurysm repair (EVAR) using stent-grafts (SGs). However, the effectiveness of this less invasive approach is compromised by the incidence of persistent flow in the aneurysm, called endoleak, which can lead to aneurysm rupture if undetected. Consequently, long-term surveillance by annual computed tomography is required, which increases the cost of the EVAR procedure and exposes the patient to ionizing radiation and a nephrotoxic contrast agent. The mechanism of aneurysm rupture secondary to endoleak is related to an aneurysm sac pressure close to systemic pressure. There is a relationship between contraction or expansion of the sac and sac pressurization. Residual pressurization of the abdominal aortic aneurysm induces pulsation and blood circulation inside the sac, preventing sac thrombosis and healing of the aneurysm. Non-invasive vascular elastography (NIVE) using the Lagrangian Speckle Model Estimator (LSME) could become a complementary imaging technique for the follow-up of aneurysms after endovascular repair. NIVE can provide important information on the organization of the thrombus in the aneurysm sac and on the detection of endoleaks. Characterization of thrombus organization was not possible in a previous NIVE study, one limitation of which was the absence of computed tomography as the gold standard for endoleak diagnosis. We sought to apply and optimize the NIVE technique for the follow-up of abdominal aortic aneurysms (AAA) after EVAR with stent-grafts in a canine model, with the aim of detecting and characterizing endoleaks and thrombus organization. SGs were implanted in a group of 18 dogs with an aneurysm created in the abdominal aorta. Type I endoleaks were created in 4 aneurysms and type II endoleaks in 13 aneurysms, while one aneurysm had no endoleak. Doppler ultrasound (DUS) and NIVE examinations were performed before EVAR and at 1 week, 1 month, 3 months and 6 months afterwards. Angiography, computed tomography and macroscopic sectioning were performed at sacrifice. Strain values were computed using the LSME algorithm. Regions of endoleak, fresh (non-organized) thrombus and solid (organized) thrombus were identified and segmented by comparing the computed tomography and macroscopic findings, and the strain values in these regions were compared. Strain values were significantly different between endoleak areas and areas of fresh or organized thrombus, and between areas of fresh and organized thrombus. All endoleaks were clearly characterized by the elastography examinations. No correlation was found between strain values and endoleak type, sac pressure, endoleak size or aneurysm size.
Abstract:
Background: Mitral regurgitation (MR) is a valvular disease requiring intervention in the most severe cases. Percutaneous mitral valve repair with the MitraClip device is a safe and effective treatment for patients at high surgical risk. We sought to evaluate the clinical outcomes and the economic impact of this therapy compared with medical management of heart failure patients with symptomatic mitral regurgitation. Methods: The study consisted of two phases: an observational study of patients with heart failure and mitral regurgitation treated with medical therapy or the MitraClip, and an economic model. The results of the observational study were used to estimate the parameters of the decision model, which estimated the costs and benefits of a hypothetical cohort of patients with heart failure and severe mitral regurgitation treated with either standard medical therapy or the MitraClip. Results: The cohort of patients treated with the MitraClip system was propensity-score matched to a population of heart failure patients, and their outcomes were compared. With a mean follow-up of 22 months, mortality was 21% in the MitraClip cohort and 42% in the medical management cohort (p = 0.007). The decision model showed that the MitraClip increased life expectancy from 1.87 to 3.60 years and quality-adjusted life years (QALYs) from 1.13 to 2.76. The incremental cost was 52,500 Canadian dollars, corresponding to an incremental cost-effectiveness ratio (ICER) of $32,300 per QALY gained. The results were sensitive to the survival benefit. Conclusion: In this cohort of patients with symptomatic heart failure and significant mitral regurgitation, therapy with the MitraClip is associated with superior survival and is cost-effective compared with medical therapy.
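Using only the figures reported in the abstract, the incremental cost-effectiveness ratio follows from a one-line calculation, reproduced below as a minimal sketch.

```python
# Incremental cost-effectiveness ratio (ICER) from the figures reported in the abstract.
incremental_cost_cad = 52_500.0            # marginal cost of MitraClip vs medical therapy
qaly_medical, qaly_mitraclip = 1.13, 2.76  # quality-adjusted life years per strategy

icer = incremental_cost_cad / (qaly_mitraclip - qaly_medical)
# ~32,200 CAD per QALY; the abstract reports ~32,300, the small gap reflecting
# rounding of the underlying model inputs.
print(f"ICER: {icer:,.0f} CAD per QALY gained")
```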