Abstract:
Characterizing geological features and structures in three dimensions on inaccessible rock cliffs is needed both to assess natural hazards such as rockfalls and rockslides and to carry out investigations aimed at mapping geological contacts and building stratigraphic and fold models. Detailed 3D data such as LiDAR point clouds make it possible to study accurately the hazard processes and the structure of geological features, particularly on vertical and overhanging rock slopes. 3D geological models thus have great potential for application to a wide range of geological investigations, both in research and in applied projects such as mines, tunnels and reservoirs. Recent developments in ground-based remote sensing techniques (LiDAR, photogrammetry and multispectral/hyperspectral imaging) are revolutionizing the acquisition of morphological and geological information. Consequently, there is great potential for improving the modeling of geological bodies, as well as of failure mechanisms and stability conditions, by integrating detailed remotely sensed data. During the past ten years, several large rockfall events occurred along important transportation corridors where millions of people travel every year (Switzerland: the Gotthard motorway and railway; Canada: the Sea to Sky highway between Vancouver and Whistler). These events show that there is still a lack of knowledge concerning the detection of potential rockfalls, leaving mountain settlements and roads exposed to high risk. It is necessary to understand the main factors that destabilize rocky outcrops even where inventories are lacking and no clear morphological evidence of rockfall activity is observed. To improve the chances of forecasting future landslides, it is crucial to understand the evolution of rock slope stability. Defining the areas theoretically most prone to rockfalls is particularly useful for simulating trajectory profiles and generating hazard maps, which are the basis for land-use planning in mountainous regions. The most important questions to address in assessing rockfall hazard are: Where are the most probable sources of future rockfalls located? At what frequency do these rockfalls occur? To address them, I characterized the fracturing patterns both in the field and in LiDAR point clouds. I then developed a model that computes failure mechanisms directly on terrestrial point clouds in order to assess rockfall susceptibility at the cliff scale. Similar procedures were already available for evaluating rockfall susceptibility on aerial digital elevation models; the new model makes it possible to detect the most susceptible rockfall sources with unprecedented detail in vertical and overhanging areas. The computed most probable rockfall source areas in the granitic cliffs of Yosemite Valley and the Mont-Blanc massif were then compared with inventoried rockfall events to validate the calculation methods. Yosemite Valley was chosen as a test area because of its particularly intense rockfall activity (about one rockfall every week), which leads to a high rockfall hazard. The west face of the Dru was chosen for its relevant rockfall activity and especially because it was affected by some of the largest rockfalls that occurred in the Alps during the last ten years. Moreover, both areas are suitable because of their huge vertical and overhanging cliffs, which are difficult to study with classical methods.
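The model itself is not reproduced in the abstract; purely as a sketch of the kind of kinematic test such a model might apply to orientations derived from point-cloud normals, here is a Markland-type check for planar sliding (function names, conventions and thresholds are illustrative assumptions, not the thesis's implementation):

```python
import numpy as np

def dip_and_direction(normal):
    """Dip angle and dip direction (degrees) of a plane from its normal
    (coordinates: x = east, y = north, z = up)."""
    n = np.asarray(normal, dtype=float)
    n = n / np.linalg.norm(n)
    if n[2] < 0:                      # work with the upward-pointing normal
        n = -n
    dip = np.degrees(np.arccos(n[2]))
    dip_dir = np.degrees(np.arctan2(-n[0], -n[1])) % 360.0
    return dip, dip_dir

def planar_sliding_feasible(face_normal, joint_dip, joint_dip_dir,
                            friction_deg=35.0, max_dir_diff=20.0):
    """Markland-type kinematic test for planar sliding on one joint set:
    the joint must daylight (dip less steeply than the local face), dip
    more steeply than the friction angle, and dip roughly in the same
    direction as the face."""
    face_dip, face_dir = dip_and_direction(face_normal)
    dir_diff = abs((face_dir - joint_dip_dir + 180.0) % 360.0 - 180.0)
    return (friction_deg < joint_dip < face_dip) and (dir_diff < max_dir_diff)

# e.g. the outward normal of a cliff patch dipping ~80 degrees toward 090,
# tested against a joint set oriented 48/095:
print(planar_sliding_feasible([0.98, 0.0, -0.17], 48.0, 95.0))  # True
```

Applied point by point over a terrestrial point cloud, a test of this kind flags the patches where a given discontinuity set is kinematically free to fail.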
Limit equilibrium models were applied to several case studies to evaluate the effects of different parameters on the stability of rock slope areas. The impact of the degradation of rock bridges on the stability of large compartments in the west face of the Dru was assessed using finite element modeling. In particular, I conducted a back-analysis of the large rockfall event of 2005 (265,000 m³), which removed the entire south-west pillar, by integrating field observations of joint conditions, the characteristics of the fracturing pattern, and the results of geomechanical tests on the intact rock. These analyses improved our understanding of the factors that influence the stability of rock compartments and were used to define the most probable future rockfall volumes at the Dru. Terrestrial laser scanning point clouds were also successfully employed to perform geological mapping in 3D, using the intensity of the backscattered signal. Another technique for producing vertical geological maps is to combine a triangulated TLS mesh with 2D geological maps. At El Capitan (Yosemite Valley) we built a georeferenced vertical map of the main plutonic rocks, which was used to investigate the reasons for the preferential rockwall retreat rate. Additional efforts to characterize the erosion rate were made at Monte Generoso (Ticino, southern Switzerland), where I attempted to improve the estimation of long-term erosion by also taking into account the volumes of unstable rock compartments. Finally, the following points summarize the main outputs of my research: the new model for computing failure mechanisms and rockfall susceptibility on 3D point clouds accurately defines the most probable rockfall source areas at the cliff scale; the analysis of the rock bridges at the Dru shows the potential of integrating detailed fracture measurements into geomechanical models of rock mass stability; and the correction of the LiDAR intensity signal makes it possible to classify a point cloud according to rock type and then use this information to model complex geological structures. Integrating these results on rock mass fracturing and composition with existing methods can improve rockfall hazard assessments and enhance the interpretation of the evolution of steep rock slopes.
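The abstract does not give the limit-equilibrium formulation used; as a generic illustration of how rock bridges enter such a calculation, a planar factor of safety in which intact rock bridges add a cohesion term over the unbroken fraction of the sliding surface might look like this (all values are placeholders, not back-analysis data from the Dru):

```python
import math

def factor_of_safety(weight_kn, slope_angle_deg, friction_angle_deg,
                     surface_area_m2, bridge_fraction, rock_cohesion_kpa):
    """Limit-equilibrium factor of safety for planar sliding on a joint
    that is only partially persistent: intact rock bridges contribute
    cohesion over the unbroken fraction of the sliding surface."""
    theta = math.radians(slope_angle_deg)
    phi = math.radians(friction_angle_deg)
    driving = weight_kn * math.sin(theta)                            # kN
    frictional = weight_kn * math.cos(theta) * math.tan(phi)         # kN
    cohesive = rock_cohesion_kpa * surface_area_m2 * bridge_fraction # kPa*m2 = kN
    return (frictional + cohesive) / driving

# As rock bridges degrade (bridge_fraction -> 0), the factor of safety drops
# toward the purely frictional case, mimicking progressive failure.
print(factor_of_safety(1e6, 55, 35, 1500, 0.08, 3000))  # ~0.93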
Abstract:
Two enoxaparin dosage regimens are used as comparators when evaluating new anticoagulants for thromboprophylaxis in patients undergoing major orthopaedic surgery, but so far no satisfactory direct comparison between them has been published. Our objective was to compare the efficacy and safety of enoxaparin 3,000 anti-Xa IU twice daily and enoxaparin 4,000 anti-Xa IU once daily in this clinical setting by indirect comparison meta-analysis, using Bucher's method. We selected randomised controlled trials comparing another anticoagulant or placebo (or no treatment) with either enoxaparin regimen for venous thromboembolism prophylaxis after hip or knee replacement or hip fracture surgery, provided that the second regimen had been assessed elsewhere against the same comparator. Two authors independently evaluated study eligibility, extracted the data, and assessed the risk of bias. The primary efficacy outcome was the incidence of venous thromboembolism. The main safety outcome was the incidence of major bleeding. Overall, 44 randomised comparisons in 56,423 patients were selected, 35 of them double-blind (54,117 patients). Compared with enoxaparin 4,000 anti-Xa IU once daily, enoxaparin 3,000 anti-Xa IU twice daily was associated with a reduced risk of venous thromboembolism (relative risk [RR]: 0.53, 95% confidence interval [CI]: 0.40 to 0.69) but an increased risk of major bleeding (RR: 2.01, 95% CI: 1.23 to 3.29). In conclusion, when interpreting the benefit-risk ratio of new anticoagulants versus enoxaparin for thromboprophylaxis after major orthopaedic surgery, the apparently greater efficacy but higher bleeding risk of the twice-daily 3,000 anti-Xa IU regimen compared with the once-daily 4,000 anti-Xa IU regimen should be taken into account.
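Bucher's adjusted indirect comparison contrasts each regimen's trials against the shared comparator on the log scale, with the variances adding. A minimal sketch of that arithmetic (the numbers in the usage line are invented, not the study's data):

```python
import math

def bucher_indirect_rr(rr_ac, ci_ac, rr_bc, ci_bc, z=1.96):
    """Indirect comparison of A vs B via a common comparator C.

    rr_ac, ci_ac: relative risk and 95% CI (low, high) of A vs C;
    rr_bc, ci_bc: same for B vs C. Returns the indirect RR of A vs B
    with its 95% CI.
    """
    log_rr_ac, log_rr_bc = math.log(rr_ac), math.log(rr_bc)
    # Back-calculate standard errors from the reported 95% CIs
    se_ac = (math.log(ci_ac[1]) - math.log(ci_ac[0])) / (2 * z)
    se_bc = (math.log(ci_bc[1]) - math.log(ci_bc[0])) / (2 * z)
    # Indirect estimate: log effects subtract, variances add
    log_rr_ab = log_rr_ac - log_rr_bc
    se_ab = math.sqrt(se_ac**2 + se_bc**2)
    return math.exp(log_rr_ab), (math.exp(log_rr_ab - z * se_ab),
                                 math.exp(log_rr_ab + z * se_ab))

# Made-up example: A vs C and B vs C trials sharing comparator C
print(bucher_indirect_rr(0.60, (0.45, 0.80), 1.13, (0.90, 1.42)))
```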
Abstract:
The indisputable evidence of climate change and its link to greenhouse gas emissions makes change in the energy production infrastructure a necessity over the coming decades. Political conventions and restrictions are pushing the energy industry toward a larger share of renewable sources in the energy supply. Alongside climate change, a sustainable energy supply is a second major issue for future development plans, and neither should come at an unbearable price. Every type of power production has environmental effects as well as strengths and weaknesses. Although each change comes at a price, environmental impacts can be minimised and energy supply security maintained by combining all feasible low-carbon technologies and by improving energy efficiency in all sectors, creating a new power production infrastructure with a tolerable energy price and minor environmental effects. GEMIS (Global Emission Model for Integrated Systems) is a life-cycle analysis program, used in this thesis to build indicative models of Finland's future energy supply. The results indicate that, to minimise all environmental effects while keeping the energy price reasonable, the supply must combine high-capacity nuclear power with a wide variety of renewable energy sources.
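GEMIS itself performs full life-cycle accounting; purely to illustrate the trade-off the abstract describes (emissions versus price of a supply mix), a toy calculation with invented shares and factors:

```python
# Illustrative only: shares and per-technology factors are placeholders,
# not GEMIS outputs or the thesis's scenario data.
mix = {                     # share of annual generation (sums to 1.0)
    "nuclear": 0.40,
    "wind":    0.20,
    "hydro":   0.15,
    "biomass": 0.15,
    "gas":     0.10,
}
gco2_per_kwh = {"nuclear": 12, "wind": 11, "hydro": 24, "biomass": 230, "gas": 490}
eur_per_mwh  = {"nuclear": 55, "wind": 60, "hydro": 45, "biomass": 80,  "gas": 90}

emissions = sum(mix[t] * gco2_per_kwh[t] for t in mix)   # g CO2-eq per kWh
cost      = sum(mix[t] * eur_per_mwh[t]  for t in mix)   # EUR per MWh
print(f"mix averages {emissions:.0f} gCO2/kWh at {cost:.0f} EUR/MWh")
```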
Abstract:
Global warming mitigation has recently become a priority worldwide. A large body of literature dealing with energy-related problems has focused on reducing greenhouse gas emissions at an engineering scale. In contrast, the minimization of climate change at a wider macroeconomic level has so far received much less attention. We investigate here how to mitigate global warming by making changes in an economy. To this end, we use a systematic tool that combines three methods: linear programming, environmentally extended input-output models, and life cycle assessment principles. The problem of identifying the key economic sectors that contribute significantly to global warming is posed in mathematical terms as a bi-criteria linear program that seeks to optimize simultaneously the total economic output and the total life-cycle CO2 emissions. We have applied this approach to the European Union economy, finding that significant reductions in global warming potential can be attained by regulating specific economic sectors. Our tool is intended to aid policymakers in the design of more effective public policies for achieving the environmental and economic targets sought.
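A toy numerical sketch of such a formulation, with an invented three-sector economy rather than the EU data of the study: total outputs follow the Leontief relation x = (I - A)^-1 y for a final demand y, and the two criteria are scalarized with a weight w so that varying w traces the Pareto frontier.

```python
import numpy as np
from scipy.optimize import linprog

# Toy environmentally extended input-output model (all data invented).
A = np.array([[0.10, 0.20, 0.05],      # technical coefficients
              [0.15, 0.05, 0.10],
              [0.05, 0.10, 0.08]])
f = np.array([0.9, 0.3, 0.1])           # CO2 emitted per unit of sector output
y0 = np.array([100.0, 150.0, 200.0])    # current final demand per sector

L = np.linalg.inv(np.eye(3) - A)        # Leontief inverse: x = L @ y

# Scalarized bi-criteria LP over final demand y: maximize total output
# minus w * total life-cycle CO2.
w = 0.5
c = -(L.sum(axis=0) - w * (f @ L))      # linprog minimizes, so negate
bounds = [(0.8 * d, 1.2 * d) for d in y0]   # allow +/- 20% demand shifts
res = linprog(c, bounds=bounds, method="highs")

x = L @ res.x
print("demand:", res.x.round(1), " output:", x.sum().round(1),
      " CO2:", (f @ x).round(1))
```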
Abstract:
In the assessment of the social impact caused by meteorological events, factors of different natures need to be considered: not only the hazard itself determines the impact a severe weather event has on society, but also features related to vulnerability and exposure. Requests for data related to insurance claims, received by meteorological services, have proved to be a good indicator of the social impact of a weather event, according to studies carried out by the Social Impact Research Group, created within the framework of the MEDEX project. Taking these requests as proxy data, various aspects of the impact of heavy rain events have been studied. Rainfall intensity, in conjunction with population density, has established itself as one of the key factors in social impact studies. One of our conclusions is that different rainfall thresholds should be applied to areas with different populations. In this study, the role of rainfall intensity has been analysed for a highly populated urban area, Barcelona. A period without significant population changes was selected for the study, to minimise effects linked to changes in vulnerability and exposure. First, we correlated the requests with rainfall recorded over different time intervals. We then propose a method to include the intensity factor in the social impact index, based on the return periods given by intensity-duration-frequency (IDF) curves.
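The abstract does not give the IDF parameterization used; as an illustration of the idea, the sketch below inverts a simple IDF law to obtain a return period for an observed intensity and maps it to an ordinal factor for an impact index (all constants are invented, not fitted Barcelona values):

```python
def return_period_years(intensity_mm_h, duration_min,
                        a=540.0, kappa=0.25, b=10.0, eta=0.75):
    """Invert a simple IDF law, i = a * T**kappa / (duration + b)**eta,
    to get the return period T of an observed intensity."""
    return (intensity_mm_h * (duration_min + b) ** eta / a) ** (1.0 / kappa)

def intensity_factor(T, breaks=(2, 10, 50)):
    """Map a return period to an ordinal factor (0..3) for an impact index."""
    return sum(T >= t for t in breaks)

T = return_period_years(60.0, 30)   # e.g. 60 mm/h sustained for 30 min
print(f"return period ~ {T:.0f} y, intensity factor {intensity_factor(T)}")
```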
Abstract:
This paper analyses how fiscal adjustment comes about when both central and sub-national governments are involved in consolidation. We test the sustainability of public debt with a fiscal rule for both the federal and the regional level of government. Results for the German Länder show that lower-tier governments bear a relatively smaller part of the burden of debt consolidation, if they consolidate at all; most of the fiscal adjustment occurs via central government debt. In contrast, both the US federal and state levels contribute to the consolidation of public finances.
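The paper's exact specification is not given in the abstract; a fiscal rule of the kind commonly tested (a Bohn-type reaction of the primary surplus to lagged debt, estimated per government tier) can be sketched as follows, on invented data:

```python
import numpy as np

# Bohn-style fiscal rule: debt is judged sustainable if the primary surplus
# responds positively to lagged debt, s_t = alpha + beta * d_{t-1} + e_t.
rng = np.random.default_rng(0)
d_lag = 40 + rng.normal(0, 8, 60)                # lagged debt / GDP, %
s = -1.0 + 0.08 * d_lag + rng.normal(0, 1, 60)   # primary surplus / GDP, %

X = np.column_stack([np.ones_like(d_lag), d_lag])
beta = np.linalg.lstsq(X, s, rcond=None)[0]      # OLS estimate
print(f"estimated response to debt: {beta[1]:.3f} (positive => sustainable)")
```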
Abstract:
Assessment of arterial disease risk in hypertensive patients. Arterial diseases are the most common cause of death worldwide, and changing lifestyles and the ageing of the population threaten to further increase their prevalence. The aim of the Kokemäenjokilaakso arterial disease prevention project was to identify, among the population aged 45-70, individuals at elevated risk of developing arterial disease. A two-stage screening method made it possible to target the lifestyle counselling given by a public health nurse at at-risk individuals and to limit the need for physician consultations to those patients likely to benefit from preventive medication. The Finnish type 2 diabetes risk assessment form and nurse-measured elevated blood pressure proved to be practical methods for screening at-risk individuals in the population. In the arterial disease prevention project in Harjavalta and Kokemäki, hypertension was diagnosed in 1,106 individuals with no arterial disease or previously diagnosed diabetes. Their results allow assessment of the effect of elevated blood pressure on glucose metabolism and on hypertensive target-organ damage. Disturbances of glucose metabolism are more common in hypertensive patients than in the rest of the population. Using the criteria of the metabolic syndrome as a precondition for performing an oral glucose tolerance test, the number of tests can be reduced by a third while still identifying nearly all hypertensive patients with diabetes or prediabetes. Among hypertensive patients, women with the metabolic syndrome in particular have an increased risk of renal insufficiency. If a hypertensive patient's renal function is assessed on the basis of plasma creatinine alone, three out of four cases of renal insufficiency go undetected, compared with using the estimated glomerular filtration rate as the screening method. Hardening of the lower-limb arteries can be detected in one in three hypertensive patients, more often in those whose pulse pressure (the difference between systolic and diastolic pressure) exceeds 65 mmHg; hypertension is an independent risk factor for peripheral arterial disease. The method used in this study to determine the ankle-brachial index should be well suited to primary health care for identifying at-risk individuals. Methods for assessing total arterial disease risk, or measurement of the novel risk factor high-sensitivity C-reactive protein, cannot replace the measurement of target-organ damage in the careful assessment of a hypertensive patient's arterial disease risk.
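Two of the measurements discussed, the estimated glomerular filtration rate and the ankle-brachial index, are simple to compute. A sketch follows; the abstract does not state which eGFR equation the study used, so a simplified MDRD form (race term omitted) is assumed here, and the example values are invented:

```python
def egfr_mdrd(creatinine_mg_dl, age, female):
    """Simplified MDRD estimate of GFR (mL/min/1.73 m2)."""
    egfr = 175.0 * creatinine_mg_dl ** -1.154 * age ** -0.203
    return egfr * (0.742 if female else 1.0)

def ankle_brachial_index(ankle_systolic, brachial_systolic):
    """ABI; values <= 0.9 are commonly taken to suggest peripheral
    arterial disease."""
    return ankle_systolic / brachial_systolic

# A creatinine value that looks unremarkable can still mean reduced eGFR,
# which is why creatinine alone misses most renal insufficiency:
print(egfr_mdrd(1.1, 72, female=True))   # ~49 -> clearly reduced
print(ankle_brachial_index(105, 140))    # 0.75 -> suggests PAD
```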
Abstract:
Today, the environmental impact associated with pollution treatment is a matter of great concern. A method is proposed for evaluating the environmental risk associated with Advanced Oxidation Processes (AOPs) applied to wastewater treatment. The method is based on the type of pollution (wastewater, solids, air or soil) and on materials and energy consumption. An Environmental Risk Index (E), constructed from the numerical criteria provided, is presented for the environmental comparison of processes and/or operations. An Operation Environmental Risk Index (EOi) for each of the unit operations involved in the process and an Aspects Environmental Risk Index (EAj) for the process conditions are also estimated. Relative indexes were calculated to evaluate the risk of each operation (E/NOP) or aspect (E/NAS) involved in the process, and the percentage of the maximum achievable for each operation and aspect was determined. A practical application of the method is presented for two AOPs: photo-Fenton and heterogeneous photocatalysis with suspended TiO2 in a Solarbox. The results report the environmental risks associated with each process, so that the AOPs tested and the operations involved in them can be compared.
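The scoring criteria behind E are not reproduced in the abstract; the following sketch only illustrates the aggregation structure described (per-operation scores EOi, per-aspect scores EAj, the relative indexes, and the percentage of the maximum), with invented scales, operations and values:

```python
# Assumed 0-5 score per criterion; all entries below are placeholders.
MAX_SCORE = 5

operations = {            # EOi: one risk score per unit operation
    "UV irradiation": 3,
    "H2O2 dosing":    2,
    "neutralisation": 1,
}
aspects = {               # EAj: one risk score per process condition
    "reagent consumption": 4,
    "energy use":          3,
}

E_op = sum(operations.values())
E_as = sum(aspects.values())
E = E_op + E_as

print(f"E = {E}")
print(f"E/NOP = {E_op / len(operations):.2f}, E/NAS = {E_as / len(aspects):.2f}")
pct_max = 100 * E / (MAX_SCORE * (len(operations) + len(aspects)))
print(f"{pct_max:.0f}% of the maximum achievable")
```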
Abstract:
Immaturity of the gut barrier system in the newborn has been seen to underlie a number of chronic diseases originating in infancy and manifesting later in life. The gut microbiota and breast milk provide the most important maturation signals for the gut-related immune system and reinforce the gut mucosal barrier function. Recently, the composition of the gut microbiota has been proposed to be instrumental in the control of host body weight and metabolism, as well as in the inflammatory state characterizing overweight and obesity. On this basis, inflammatory Western lifestyle diseases, including the development of overweight, may represent a potential target for probiotic interventions beyond the well-documented clinical applications. The purpose of the present work was to study the efficacy and safety of perinatal probiotic intervention. The material comprised two ongoing, prospective, double-blind NAMI (Nutrition, Allergy, Mucosal immunology and Intestinal microbiota) probiotic interventions. In the mother-infant nutrition and probiotic study, a total of 256 women were randomized in their first trimester of pregnancy into a dietary intervention and a control group. The intervention group received intensive dietary counselling provided by a nutritionist and was further randomized at baseline, double-blind, to receive probiotics (Lactobacillus rhamnosus GG and Bifidobacterium lactis) or placebo. The intervention period extended from the first trimester of pregnancy to the end of exclusive breastfeeding. In the allergy prevention study, a total of 159 women were randomized, double-blind, to receive probiotics (Lactobacillus rhamnosus GG) or placebo 4 weeks before the expected delivery, the intervention extending for 6 months postnatally. Additionally, patient data on all premature infants with very low birth weight (VLBW) treated in the Department of Paediatrics, Turku University Hospital, during the years 1997-2008 were utilized. The perinatal probiotic intervention reduced the risk of gestational diabetes mellitus (GDM) in the mothers, and perinatal dietary counselling reduced that of fetal overgrowth in GDM-affected pregnancies. Early gut microbiota modulation with probiotics modified the growth pattern of the child by restraining excessive weight gain during the first years of life. The colostrum adiponectin concentration was shown to depend on maternal diet and nutritional status during pregnancy; it was also higher in the colostrum received by children who were of normal weight, compared with those overweight, at the age of 10 years. The early perinatal probiotic intervention and the postnatal probiotic intervention in VLBW infants were shown to be safe. To conclude, the findings of this study provide clinical evidence supporting the involvement of the initial microbial and nutritional environment in the metabolic programming of the child. Manipulation of early gut microbial communities with probiotics might offer an applicable strategy for influencing individual energy homeostasis and thus preventing excessive body-weight gain. The results add weight to the hypothesis that interventions aiming to prevent obesity and its metabolic consequences later in life should be initiated as early as the perinatal period.
Abstract:
The skill of programming is a key asset for every computer science student. Many studies have shown that it is a hard skill to learn, and the outcomes of programming courses have often been substandard. Thus, a range of methods and tools have been developed to assist students' learning processes. One of the biggest fields in computer science education is the use of visualizations as a learning aid, and many visualization-based tools have been developed over the last few decades. The studies conducted in this thesis focus on two visualization-based tools, TRAKLA2 and ViLLE. This thesis includes results from multiple empirical studies of the effects that the introduction and usage of these tools have on students' opinions and performance, and of the implications from a teacher's point of view. The results show that students preferred to do web-based exercises and felt that those exercises contributed to their learning. The usage of the tools motivated students to work harder during their course, which showed in overall course performance and drop-out statistics. We have also shown that visualization-based tools can enhance the learning process, one of the key factors being a higher and more active level of engagement (see the Engagement Taxonomy by Naps et al., 2002). Automatic grading accompanied by immediate feedback helps students to overcome obstacles during the learning process and to grasp the key elements of the learning task. Such tools can help us cope with the fact that many programming courses are overcrowded while teaching resources are limited: automatic assessment can be applied to the exercises best suited to the web (such as tracing and simulation), since this supports students' independent learning regardless of time and place. In summary, we can use a course's resources more efficiently to increase the quality of the learning experience of the students and the teaching experience of the teacher, and even to increase the performance of the students. This thesis also offers methodological results, which contribute to developing insight into the conduct of empirical evaluations of new tools or techniques. When we evaluate a new tool, especially one accompanied by visualization, we need to give a proper introduction to it and to the graphical notation it uses. The standard procedure should also include capturing the screen with audio to confirm that the participants of the experiment are doing what they are supposed to do. By taking such measures in studies of the learning impact of visualization support, we can avoid drawing false conclusions from our experiments. As computer science educators, we face two important challenges. Firstly, we need to start delivering the message, in our own institutions and all over the world, about new, scientifically proven teaching innovations like TRAKLA2 and ViLLE. Secondly, we have relevant experience in conducting teaching-related experiments, and can thus support our colleagues in learning the essential know-how of research-based improvement of their teaching. This approach can transform academic teaching into publications, and by utilizing it we can significantly increase the adoption of new tools and techniques and the overall knowledge of best practices.
In the future, we need to combine our forces and tackle these universal and common problems together by creating multi-national and multi-institutional research projects. We need to create a community and a platform in which we can share these best practices and at the same time easily conduct multi-national research projects.
Abstract:
To describe the shift of purchasing from an administrative to a strategic function, academics have put forward maturity models that help practitioners compare their purchasing activities to those of industry top performers and to best practices. However, none of the models aims to assess purchasing maturity from the after-sales point of view, even though after-sales activities are acknowledged as a relevant source of revenue, profit, and competitive advantage in most manufacturing firms. The maturity of purchasing and supply management practices has a large impact on the overall performance of the spare parts supply chain and, ultimately, on value creation and relationship building for the end customer. The research was done as a case study for a European after-sales organization that is part of a globally operating industrial firm specialized in heavy machinery. The study mapped the current state of the purchasing practices in the case organization and identified the relevant areas for future development. The study was based on the purchasing maturity model developed by Schiele (2007) and also investigated how applicable that maturity model is in the spare parts supply chain context. Data for the assessment were gathered in five expert interviews within the case organization and with other parties involved in the company's spare parts supply chain. An inventory management dimension was added to the original maturity model in order to better capture the important areas of a spare parts supply chain; the five added questions were derived from the spare parts management literature and verified as relevant by the case organization's personnel. The results indicate that the areas in greatest need of development in the case organization are: better collaboration between the sourcing and operative procurement functions, use of installed-base information in spare parts management, development of a training plan for new buyers, assessment of aligned KPIs between the supply chain parties, and better definition of the role of after-sales sourcing. The purchasing maturity model used in this research worked well in the H&R Leading, Controlling, and Inventory Management dimensions. The assessment was more difficult to conduct in the Supplier-related processes, Process integration, and Organizational structure dimensions, mainly because the assessment in these sections would in part require a more company-wide perspective. The results also indicate that the purchasing maturity model developed by Schiele (2007) captures the relevant areas of the spare parts supply chain as well.
Abstract:
The topic of this Master's thesis is risk assessment in the supply chain, and the work was done for a company operating in the pharmaceutical industry. The unique features of the industry bring additional challenges to risk management, owing to high regulatory, documentation, and traceability requirements. The objective of the thesis was to generate a template for assessing the risks in the supply chain of current and potential suppliers of the case company. Risks pertaining to the case setting were drawn mainly from in-house expertise on this specific product and supply chain, as well as from academic research papers and risk management theory. A questionnaire was set up to score the identified risks on impact, occurrence, and possibility of detection. Through this classification of the severity of the risks, the supplier assessment template was formed. A questionnaire template, comprising the top 10 risks affecting the flow of information and materials in this setting, was formulated to serve as a generic tool for assessing risks in the supply chain of a pharmaceutical company. The template was tested on another supplier for usability and accuracy of the identified risks, and it proved to function in a different supply chain and product setting.
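Scoring each risk on impact, occurrence, and detectability is the structure of a classic FMEA risk priority number (RPN = impact x occurrence x detection). A sketch of how such scores could be turned into a top-risk ranking (scales and example risks are invented, not the thesis's actual risk register):

```python
# Each risk scored 1-10 per criterion; for detection, 10 = hardest to detect.
risks = [
    ("batch documentation incomplete",     8, 4, 6),
    ("cold-chain excursion in transit",    9, 3, 7),
    ("single-source API supplier outage", 10, 2, 3),
    ("customs delay at import",            6, 5, 2),
]

# Rank by risk priority number, highest first.
ranked = sorted(risks, key=lambda r: r[1] * r[2] * r[3], reverse=True)
for desc, i, o, d in ranked:
    print(f"RPN {i * o * d:4d}  {desc}")
```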
Abstract:
Understanding how firms create, communicate, and deliver value to customers is a key factor when firms seek to differentiate themselves in increasingly competitive and commoditized business markets. As product and price have become less important differentiators in many industries, suppliers are increasingly seeking to differentiate themselves on delivered customer value. Therefore, to gain a holistic understanding of what their offerings are worth to the customer, suppliers need to conduct customer value assessment, which quantifies the impact of a supplier's offering on the customer's costs and returns. From a managerial perspective, however, customer value assessment is the single most critical challenge for firms in business markets. Consequently, developing holistic frameworks for customer value assessment is seen as one of the most important research priorities for marketing research. The purpose of this study is to explore the process of customer value assessment in business markets. Business markets represent a context in which an increasing number of industrial firms are transitioning from basic product offerings towards service-based and solution-oriented hybrid offerings, which emphasize value co-creation and realization in the long term, making their monetary value difficult to quantify. This study employs an exploratory and qualitative research design, applying inductive and discovery-oriented grounded theory and multiple-case research methods. The empirical data comprise interviews with 61 managers from 12 industrial firms, including seven best-practice firms in customer value assessment. The findings show that customer value assessment is essentially a cross-functional process involving several organizational functions. The process begins well before and continues long after the actual delivery, often until the end of the offering's life cycle. Furthermore, the findings shed light on alternative strategies that firms in business markets can adopt to implement the customer value assessment process. Overall, the findings contribute to customer value research, the sales and organizational management literature, and the service marketing and solutions business literature, and suggest several managerial implications for how firms in business markets can adopt a holistic approach to assessing the value created for customers.
Abstract:
With increasing concern over the sustainability of gold mining, thiosulphate has been investigated as an alternative lixiviant to cyanide, which is toxic to the environment. To investigate the prospects of applying thiosulphate leaching in the coming years, a life cycle assessment was conducted comparing the environmental footprints of cyanidation and thiosulphate leaching. The results showed that the most significant environmental impact of cyanidation is human toxicity, while the ammonia used in thiosulphate leaching is a major contributor to acidification. In addition, an ecosystem evaluation was performed to indicate the potential damage from an example cyanide spill at the Kittilä mine, yielding a significant environmental risk cost that has to be taken into account in decision making. Opinions collected from an online LinkedIn discussion forum suggest that sustainability concerns alone will not be enough to drive a significant move away from conventional cyanidation until policy on cyanide use is tightened; the International Cyanide Code is therefore crucial for safe gold production. Nevertheless, it is worth weighing the value of a healthy ecosystem against that of the gold for long-term benefit.
Abstract:
Ventricular late potentials are low-amplitude signals originating from damaged myocardium and detected on the body surface by ECG filtering and averaging. Digital filters present in commercial equipment may interfere with the ability to stratify arrhythmia risk. We compared the 40-Hz BiSpec (BI) filter and the classical 40- to 250-Hz band-pass bidirectional Butterworth (BD) filter in terms of their impact on time-domain variables and diagnostic properties. In a transverse, retrospective, age-adjusted case-control study, 221 subjects in sinus rhythm without bundle branch block were divided into three groups after signal-averaged ECG acquisition: GI (N = 40), clinically normal controls; GII (N = 158), subjects with coronary heart disease without sustained monomorphic ventricular tachycardia (SMVT); and GIII (N = 23), subjects with heart disease and documented SMVT. Conventional variables analyzed from vector magnitude data, after averaging to a final noise level of 0.3 µV, were obtained by applying each filter to the averaged signal, and were evaluated in pairs by numerical comparison and by diagnostic agreement assessment, using conventional and optimized thresholds of normality. Significant differences were found between BI and BD variables in all groups, with diagnostic results showing significant disagreement between the two filters [kappa value of 0.61 (P<0.05) for GII and 0.31 for GIII (P = NS)]. Sensitivity for SMVT was lower with BI than with BD (65.2 vs 91.3%, respectively, P<0.05). The filters provided significantly different numerical and diagnostic results, and the BI filter showed only limited clinical applicability to risk stratification of ventricular arrhythmia.
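For reference, the classical processing chain, band-pass filtering of the averaged orthogonal leads followed by computation of the vector magnitude used for the time-domain variables, can be sketched as follows. This is a generic illustration: scipy's forward-backward filtfilt approximates the "bidirectional" filtering named in the abstract, and the commercial BD and BI implementations may differ in detail.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def late_potential_magnitude(leads, fs=1000.0, band=(40.0, 250.0), order=4):
    """Band-pass each signal-averaged orthogonal lead (X, Y, Z) with a
    zero-phase Butterworth filter and return the vector magnitude
    sqrt(X^2 + Y^2 + Z^2)."""
    nyq = fs / 2.0
    b, a = butter(order, [band[0] / nyq, band[1] / nyq], btype="band")
    filtered = [filtfilt(b, a, lead) for lead in leads]  # forward-backward pass
    return np.sqrt(sum(f ** 2 for f in filtered))

# usage: vm = late_potential_magnitude([x, y, z], fs=1000.0)
# Time-domain variables (e.g. filtered QRS duration, terminal RMS voltage)
# are then measured on vm.
```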