230 results for value-passing
Abstract:
A key concern in contemporary fashion/textiles design is the emergence of 'fast fashion', best explained as "buy it Friday, wear it Saturday and throw it away on Sunday" (O'Loughlin, 2007). In this retail atmosphere of "pile it high, sell it cheap" and "quick to market", even designer goods have acquired a throwaway status. This culture of consumerism is the antithesis of sustainability and poses a dilemma for designers and producers seeking sustainable practice in these disciplines (de Blas, 2010). Design researchers, including those in textiles/fashion, have begun to explore a key question of the 21st century in order to create a vision and rationale for their disciplines: can products be designed to offer added value to the consumer and hence contribute to a more sustainable industry? Fashion textiles design has much to answer for in contributing to unsustainable practices in design, production and waste on a global scale. However, designers within this field also have great potential to contribute practical 'real world' solutions. This paper provides an overview of some of the design and technological developments from the fashion/textiles industry, endorsing a model in which designers and technicians use their transferable skills for wellbeing rather than desire. Smart materials in the form of responsive and adaptive fibres and fabrics, combined with electroactive devices and ICT, are increasingly shaping many aspects of society, particularly in the leisure industry, and interactive consumer products are ever more visible in healthcare. Combinations of biocompatible delivery devices with biosensing elements can create early warning and monitoring systems that analyse, sense and actuate, and that can be linked to data logging and patient records via intelligent networks. Patient-sympathetic, 'smart' fashion/textiles applications based on interdisciplinary expertise in textiles design and technology are emerging. An analysis of a series of case studies demonstrates the potential of fashion textiles design practitioners to exploit the concept of adding value through technological garment and textiles applications and enhancements for health and wellbeing, and in doing so to contribute to a more sustainable future for the fashion/textiles design industry.
Abstract:
Proposed transmission smart grids will use a digital platform for the automation of substations operating at voltage levels of 110 kV and above. The IEC 61850 series of standards, released in parts over the last ten years, provides a specification for substation communications networks and systems. These standards, along with IEEE Std 1588-2008 Precision Time Protocol version 2 (PTPv2) for precision timing, are recommended by both the IEC Smart Grid Strategy Group and the NIST Framework and Roadmap for Smart Grid Interoperability Standards for substation automation. IEC 61850-8-1 and IEC 61850-9-2 provide an interoperable solution that supports multi-vendor digital process bus solutions, allowing the removal of potentially lethal voltages and damaging currents from substation control rooms, reducing the amount of cabling required in substations, and facilitating the adoption of non-conventional instrument transformers (NCITs). IEC 61850, PTPv2 and Ethernet are three complementary protocol families that together define the future of sampled value (SV) digital process connections for smart substation automation. This paper describes a test and evaluation system that uses real-time simulation, protection relays, PTPv2 time clocks and artificial network impairment to investigate technical impediments to the adoption of SV process bus systems by transmission utilities. Knowing the limits of a digital process bus, especially when sampled values and NCITs are included, will enable utilities to make informed decisions regarding the adoption of this technology.
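As a rough illustration of the scale of traffic a sampled value process bus must carry (not taken from the paper), the sketch below estimates the Ethernet load generated by IEC 61850-9-2 merging units. The 80 samples-per-cycle rate follows the commonly used 9-2LE protection profile; the per-frame byte count is an assumed, representative figure and will vary with implementation.

```python
# Illustrative sketch (not from the paper): estimating the Ethernet load that
# IEC 61850-9-2 sampled value (SV) streams place on a process bus.
NOMINAL_FREQ_HZ = 50          # power system frequency
SAMPLES_PER_CYCLE = 80        # 9-2LE protection profile (256 for metering)
SV_FRAME_BYTES = 130          # assumed on-wire frame size per ASDU, incl. headers

def sv_bandwidth_mbps(merging_units: int) -> float:
    """Approximate process bus load in Mbit/s for a number of merging units,
    each publishing one SV stream."""
    frames_per_second = NOMINAL_FREQ_HZ * SAMPLES_PER_CYCLE      # 4000 frames/s
    bits_per_second = merging_units * frames_per_second * SV_FRAME_BYTES * 8
    return bits_per_second / 1e6

if __name__ == "__main__":
    for n in (1, 4, 8):
        print(f"{n} merging unit(s): ~{sv_bandwidth_mbps(n):.1f} Mbit/s")
```

Even a handful of merging units publishing at this rate produces several megabits per second of multicast traffic, which is why the limits of the process bus are worth testing before adoption.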
Abstract:
One of Cultural Studies' most important contributions to academic thinking about culture is the acceptance as axiomatic that we must not simply accept traditional value hierarchies in relation to cultural objects (see, for example, McGuigan, 1992: 157; Brunsdon, 1997: 5; Wark, 2001). Since Richard Hoggart and Raymond Williams took popular culture as a worthy object of study, Cultural Studies practitioners have accepted that the terms in which cultural debate had previously been conducted involved a category error. Opera is not 'better' than pop music, we believe in Cultural Studies - 'better for what?', we would ask. Similarly, Shakespeare is not 'better' than Mills and Boon, unless you can specify the purpose for which you want to use the texts. Shakespeare is indeed better than Mills and Boon for understanding seventeenth century ideas about social organisation; but Mills and Boon is unquestionably better than Shakespeare if you want slightly scandalous, but ultimately reassuring representations of sexual intercourse. The reason that we do not accept traditional hierarchies of cultural value is that we know that the culture that is commonly understood to be 'best' also happens to be that which is preferred by the most educated and most materially well-off people in any given culture (Bourdieu, 1984: 1-2; Ross, 1989: 211). We can interpret this information in at least two ways. On the one hand, it can be read as proving that the poorer and less well-educated members of a society do indeed have tastes which are innately less worthwhile than those of the material and educational elite. On the other hand, this information can be interpreted as demonstrating that the cultural and material elite publicly represent their own tastes as being the only correct ones. In Cultural Studies, we tend to favour the latter interpretation. We reject the idea that cultural objects have innate value, in terms of beauty, truth, excellence, simply 'there' in the object. That is, we reject 'aesthetic' approaches to culture (Bourdieu, 1984: 6, 485; Hartley, 1994: 6). In this, Cultural Studies is similar to other postmodern institutions, where high and popular culture can be mixed in ways unfamiliar to modernist culture (Sim, 1992: 1; Jameson, 1998: 100). So far, so familiar.
Abstract:
A pervasive and puzzling feature of banks' Value-at-Risk (VaR) is its abnormally high level, which leads to excessive regulatory capital. A possible explanation for the tendency of commercial banks to overstate their VaR is that they incompletely account for the diversification effect among broad risk categories (e.g., equity, interest rate, commodity, credit spread, and foreign exchange). By underestimating the diversification effect, banks' proprietary VaR models produce overly prudent market risk assessments. In this paper, we examine empirically the validity of this hypothesis using actual VaR data from major US commercial banks. In contrast to the VaR diversification hypothesis, we find that US banks show no sign of systematic underestimation of the diversification effect. In particular, the diversification effects used by banks are very close to (and quite often larger than) our empirical diversification estimates. A direct implication of this finding is that individual VaRs for each broad risk category, just like aggregate VaRs, are biased risk assessments.
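To make the diversification effect concrete, the sketch below computes it in the way it is commonly defined, as the gap between the sum of stand-alone VaRs for each broad risk category and the aggregate VaR. The figures are invented for demonstration; this is not the paper's estimation methodology.

```python
# Illustrative sketch: diversification effect = sum of stand-alone VaRs - aggregate VaR.
def diversification_effect(category_vars: dict[str, float], aggregate_var: float) -> tuple[float, float]:
    """Return the diversification effect in dollar terms and as a fraction
    of the sum of stand-alone VaRs."""
    total_standalone = sum(category_vars.values())
    effect = total_standalone - aggregate_var
    return effect, effect / total_standalone

if __name__ == "__main__":
    category_vars = {          # hypothetical stand-alone daily VaRs ($m)
        "equity": 40.0,
        "interest_rate": 55.0,
        "commodity": 10.0,
        "credit_spread": 25.0,
        "foreign_exchange": 15.0,
    }
    aggregate_var = 90.0       # hypothetical reported aggregate daily VaR ($m)
    effect, ratio = diversification_effect(category_vars, aggregate_var)
    print(f"Diversification effect: ${effect:.1f}m ({ratio:.0%} of summed VaRs)")
```

Understating this quantity would inflate the aggregate VaR, which is the hypothesis the paper tests against reported bank data.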
Abstract:
In this paper we study both the level of Value-at-Risk (VaR) disclosure and the accuracy of the disclosed VaR figures for a sample of US and international commercial banks. To measure the level of VaR disclosures, we develop a VaR Disclosure Index that captures many different facets of market risk disclosure. Using panel data over the period 1996–2005, we find an overall upward trend in the quantity of information released to the public. We also find that Historical Simulation is by far the most popular VaR method. We assess the accuracy of VaR figures by studying the number of VaR exceedances and whether actual daily VaRs contain information about the volatility of subsequent trading revenues. Unlike the level of VaR disclosure, the quality of VaR disclosure shows no sign of improvement over time. We find that VaR computed using Historical Simulation contains very little information about future volatility.
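For readers unfamiliar with the method, the sketch below shows a minimal Historical Simulation VaR together with an exceedance count of the kind used in backtesting. The data are simulated and the one-year rolling window is an assumption, so this illustrates the general technique rather than the paper's procedure.

```python
# Illustrative sketch (not from the paper): one-day Historical Simulation VaR
# and a count of days on which realised losses exceeded the reported VaR.
import numpy as np

def historical_var(pnl: np.ndarray, confidence: float = 0.99) -> float:
    """Historical Simulation VaR: the loss quantile of past P&L, as a positive number."""
    return -np.quantile(pnl, 1.0 - confidence)

def count_exceedances(pnl: np.ndarray, var_series: np.ndarray) -> int:
    """Number of days on which the realised loss exceeded that day's VaR."""
    return int(np.sum(-pnl > var_series))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pnl = rng.normal(loc=0.0, scale=1.0, size=750)    # ~3 years of simulated daily P&L
    window = 250                                       # assumed 1-year rolling window
    var_series = np.array([
        historical_var(pnl[t - window:t]) for t in range(window, len(pnl))
    ])
    exceed = count_exceedances(pnl[window:], var_series)
    print(f"99% HS VaR on final day: {var_series[-1]:.2f}")
    print(f"Exceedances over {len(var_series)} days: {exceed}")
```

Because Historical Simulation simply replays past P&L, the resulting VaR reacts slowly to changes in market conditions, which is consistent with the finding that it carries little information about future volatility.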
Abstract:
Lignocellulosic waste materials are the most promising feedstock for generation of a renewable, carbon-neutral substitute for existing liquid fuels. The development of value-added products from lignin will greatly improve the economics of producing liquid fuels from biomass. This review gives an outline of lignin chemistry, describes the current processes of lignocellulosic biomass fractionation and the lignin products obtained through these processes, then outlines current and potential value-added applications of these products, in particular as components of polymer composites. Research highlights: The use of lignocellulosic biomass to produce platform chemicals and industrial products enhances the sustainability of natural resources and improves environmental quality by reducing greenhouse and toxic emissions. In addition, the development of lignin-based products improves the economics of producing liquid transportation fuel from lignocellulosic feedstock. Value adding can be achieved by converting lignin to functionally equivalent products that rely on its intrinsic properties. This review outlines lignin chemistry and some potential high-value products that can be made from lignin. Keywords: Lignocellulose materials; Lignin chemistry; Application
Abstract:
As part of a larger literature focused on identifying and relating the antecedents and consequences of diffusing organizational practices/ideas, recent research has debated the international adoption of a shareholder-value-orientation (SVO). In this debate, financial economists characterize the adoption of an SVO as performance-enhancing and thus inevitable, while behavioral scientists dispute both claims by invoking institutional differences. This study seeks to provide some resolution to the debate (and advance current understanding of the diffusion of practices/ideas) by developing a socio-political perspective that links the antecedents and consequences of an SVO. In particular, we introduce the notion of misaligned elites and misfitted practices in our analysis of how and why differences in the technical and cultural preferences of major owners influence a firm's adoption and (un)successful implementation of an SVO among the largest 100 corporations in the Netherlands from 1992 to 2006. We conclude with a discussion of the implications of our perspective and our findings for future research on corporate governance and the diffusion of organizational practices/ideas.
Abstract:
The literature abounds with descriptions of failures in high-profile projects, and a range of initiatives has been generated to enhance project management practice (e.g., Morris, 2006). Our own research suggests that scores of other project failures go unrecorded. Many of these failures can be explained using existing project management theory: poor risk management, inaccurate estimating, cultures of optimism dominating decision making, stakeholder mismanagement, inadequate timeframes, and so on. Nevertheless, in spite of extensive discussion and analysis of failures and attention to the presumed causes of failure, projects continue to fail in unexpected ways. In the 1990s, three U.S. state departments of motor vehicles (DMV) cancelled major projects due to time and cost overruns and an inability to meet project goals (IT-Cortex, 2010). The California DMV failed to revitalize their drivers' license and registration application process after spending $45 million. The Oregon DMV cancelled their five-year, $50 million project to automate their manual, paper-based operation after three years, when the estimates grew to $123 million, its projected duration stretched to eight years or more, and the prototype proved a complete failure. In 1997, the Washington state DMV cancelled their license application mitigation project because it would have been too big and obsolete by the time it was estimated to be finished. There are countless similar examples of projects that have been abandoned or that have not delivered the requirements.
Abstract:
Numerous tools and techniques have been developed to eliminate or reduce waste and implement lean concepts in the manufacturing environment. However, appropriate lean tools need to be selected and implemented in order to fulfil manufacturers' needs within their budgetary constraints. As a result, it is important to identify manufacturer needs and implement only those tools which contribute the greatest benefit to those needs. In this research a mathematical model is proposed for maximising the perceived value of manufacturer needs, and a step-by-step methodology is developed to select the best performance metrics along with appropriate lean strategies within the budgetary constraints. The proposed model and method are demonstrated with the help of a case study.
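The paper's mathematical model is not reproduced in the abstract. As a hedged illustration of the general idea, selecting lean tools to maximise perceived value within a budget can be framed as a 0/1 knapsack problem, sketched below with hypothetical value scores and costs.

```python
# Illustrative sketch only: values and costs are hypothetical, and this is a
# generic knapsack formulation rather than the paper's actual model.
from itertools import combinations

TOOLS = {                      # tool: (perceived value score, cost in $k)
    "5S": (30, 10),
    "Kanban": (45, 25),
    "SMED": (35, 20),
    "TPM": (50, 40),
    "Value Stream Mapping": (40, 15),
}

def best_selection(tools: dict, budget: float):
    """Exhaustive search over tool subsets (adequate for small tool sets)."""
    best_value, best_set = 0, ()
    names = list(tools)
    for r in range(1, len(names) + 1):
        for subset in combinations(names, r):
            value = sum(tools[t][0] for t in subset)
            cost = sum(tools[t][1] for t in subset)
            if cost <= budget and value > best_value:
                best_value, best_set = value, subset
    return best_set, best_value

if __name__ == "__main__":
    chosen, value = best_selection(TOOLS, budget=60)
    print(f"Selected tools: {chosen} (total perceived value {value})")
```

A real implementation would replace the exhaustive search with an integer programming solver once the number of candidate tools and metrics grows.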
Abstract:
Ethernet is a key component of the standards used for digital process buses in transmission substations, namely IEC 61850 and IEEE Std 1588-2008 (PTPv2). These standards use multicast Ethernet frames that can be processed by more than one device. This presents significant engineering challenges when implementing a sampled value process bus, due to the large amount of network traffic. A system of network traffic segregation using a combination of Virtual LAN (VLAN) and multicast address filtering with managed Ethernet switches is presented. This includes VLAN prioritisation of traffic classes such as the IEC 61850 protocols GOOSE, MMS and sampled values (SV), and other protocols such as PTPv2. Multicast address filtering is used to limit SV/GOOSE traffic to defined subsets of subscribers. A method to map substation plant reference designations to multicast address ranges is proposed that enables engineers to determine the type of traffic and the location of the source by inspecting the destination address. This method and the proposed filtering strategy simplify future changes to the prioritisation of network traffic, and are applicable to both process bus and station bus applications.
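The abstract does not give the paper's exact mapping scheme, so the sketch below is a hypothetical illustration of the idea: deriving a multicast destination MAC address from a plant reference designation so that the traffic type and source can be read from the address. The GOOSE and SV base addresses follow the ranges commonly recommended for IEC 61850 multicast traffic; hashing the designation into the low-order bytes is an assumption made purely for illustration.

```python
# Hypothetical sketch, not the paper's scheme: map a plant reference
# designation to a multicast MAC address within a per-traffic-type range.
GOOSE_BASE = 0x010CCD010000   # 01-0C-CD-01-xx-xx (GOOSE range)
SV_BASE    = 0x010CCD040000   # 01-0C-CD-04-xx-xx (sampled values range)

def designation_to_mac(designation: str, traffic: str) -> str:
    """Map a plant reference designation (e.g. '=E01.Q1') to a multicast MAC."""
    base = {"GOOSE": GOOSE_BASE, "SV": SV_BASE}[traffic]
    offset = sum(ord(c) for c in designation) % 0x0200   # stay within a 512-address block
    mac = base + offset
    return "-".join(f"{(mac >> (8 * i)) & 0xFF:02X}" for i in reversed(range(6)))

if __name__ == "__main__":
    print(designation_to_mac("=E01.Q1", "SV"))     # e.g. 01-0C-CD-04-01-93
    print(designation_to_mac("=E01.Q1", "GOOSE"))
```

With such a convention, a switch's multicast filter table and an engineer reading a packet capture can both infer the traffic class and originating bay directly from the destination address.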
Abstract:
Understanding consumer value is imperative in health care, as the receipt of value drives the demand for health care services. While there is increasing research into health care that adopts an economic approach to value, this paper investigates a non-financial exchange context and uses an experiential approach to value, guided by a social marketing approach to behaviour change. An experiential approach is deemed more appropriate for government health care services that are free and intended for prevention rather than treatment. Thus, instead of using an illness paradigm to view health service outcomes, we adopt a wellness paradigm. Using qualitative data gathered during 25 in-depth interviews, the authors demonstrate how social marketing thinking has guided the identification of six themes that represent four dimensions of value (functional, emotional, social and altruistic) evident during the health care consumption process of a free government service.
Abstract:
For the 2005 season, Mackay Sugar and its growers agreed to implement a new cane payment system. The aim of the new system was to better align the business drivers between the mill and its growers and as a result improve business decision making. The technical basis of the new cane payment system included a fixed sharing of the revenue from sugar cane between the mill and growers. Further, the new system replaced the CCS formula with a new estimate of recoverable sugar (PRS) and introduced NIR for payment analyses. Significant mill and grower consultation processes led to the agreement to implement the new system in 2005 and this consultative approach has been reflected in two seasons of successful operation.
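The agreed revenue share and the PRS formula are not stated in the abstract, so the sketch below is purely illustrative: it shows how a grower payment could be computed under a fixed revenue-sharing arrangement, with hypothetical share, price and recoverable-sugar figures.

```python
# Purely illustrative: the actual Mackay Sugar share and PRS formula are not
# given in the abstract; all numbers below are hypothetical placeholders.
def grower_payment(tonnes_cane: float, prs_percent: float,
                   sugar_price_per_tonne: float, grower_share: float = 0.62) -> float:
    """Grower payment under a fixed revenue share.
    prs_percent: estimated recoverable sugar (% of cane weight), hypothetical.
    grower_share: fixed fraction of sugar revenue paid to the grower, hypothetical."""
    sugar_tonnes = tonnes_cane * prs_percent / 100.0
    revenue = sugar_tonnes * sugar_price_per_tonne
    return revenue * grower_share

if __name__ == "__main__":
    print(f"Payment: ${grower_payment(1000, prs_percent=13.5, sugar_price_per_tonne=450):,.0f}")
```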
Abstract:
Worldwide, there is considerable attention to providing a supportive mathematics learning environment for young children, because attitude formation and achievement in these early years of schooling have a lifelong impact. Key influences on young children during these early years are their teachers. Practising early years teachers' attitudes towards mathematics influence the teaching methods they employ, which in turn affect young students' attitudes towards mathematics and, ultimately, their achievement. However, little is known about practising early years teachers' attitudes to mathematics or how these attitudes form, which is the focus of this study. The research questions were: 1. What attitudes do practising early years teachers hold towards mathematics? 2. How did the teachers' mathematics attitudes form? This study adopted an explanatory case study design (Yin, 2003) to investigate practising early years teachers' attitudes towards mathematics and the formation of these attitudes. The research took place in a Brisbane southside school situated in a middle socio-economic area. The site was chosen due to its accessibility to the researcher. The participant group consisted of 20 early years teachers. They each completed the Attitude Towards Mathematics Inventory (ATMI) (Schackow, 2005), a 40-item instrument that measures attitudes across four dimensions: value, enjoyment, self-confidence and motivation. The teachers' total ATMI scores were classified according to five quintiles: strongly negative, negative, neutral, positive and strongly positive. The results of the survey revealed that these teachers' attitudes ranged across only three categories, with one teacher classified as strongly positive, twelve teachers classified as positive and seven teachers classified as neutral. No teachers were identified as having negative or strongly negative attitudes. Subsequent to the surveys, six teachers with a breadth of attitudes were selected from the original cohort to participate in open-ended interviews investigating the formation of their attitudes. The interview data were analysed according to the four dimensions of attitude (value, enjoyment, self-confidence, motivation) and three stages of education (primary, secondary, tertiary). Highlighted in the findings is the critical impact of schooling experiences on the formation of student attitudes towards mathematics. Findings suggest that primary school experiences are a critical influence on the attitudes of adults who become early years teachers. These findings also indicate the vital role tertiary institutions play in altering the attitudes of preservice teachers who have had negative schooling experiences. Experiences that teachers indicated contributed to the formation of positive attitudes in their own education were games, group work, hands-on activities, positive feedback and perceived relevance. In contrast, negative experiences that teachers stated influenced their attitudes were insufficient help, rushed teaching, negative feedback and a lack of relevance of the content. These findings, together with the literature on teachers' attitudes and mathematics education, were synthesised in a model titled a Cycle of Early Years Teachers' Attitudes Towards Mathematics. This model explains positive and negative influences on attitudes towards mathematics and how the attitudes of adults are passed on to children, who then, as adults themselves, repeat the cycle by passing on attitudes to a new generation.
The model can provide guidance for practising teachers and for preservice and inservice education about ways to foster positive influences on attitude formation in mathematics and to inhibit negative influences. Two avenues for future research arise from the findings of this study, both relating to attitudes and secondary school experiences. The first question concerns the resilience of attitudes, in particular how an individual can maintain positive attitudes towards mathematics developed in primary school despite secondary school experiences that typically have a negative influence on attitude. The second question concerns the relationship between attitudes and achievement, specifically why secondary students achieve good grades in mathematics despite a lack of enjoyment, which is one of the dimensions of attitude.