887 results for Ontologies Representing the same Conceptualisation


Relevance:

100.00%

Publisher:

Abstract:

How do humans respond to their social context? This question is becoming increasingly urgent in a society where democracy requires that the citizens of a country help to decide upon its policy directions, and yet those citizens frequently have very little knowledge of the complex issues that these policies seek to address. Frequently, we find that humans make their decisions more with reference to their social setting than to the arguments of scientists, academics, and policy makers. It is broadly anticipated that the agent-based modelling (ABM) of human behaviour will make it possible to treat such social effects, but we take the position here that a more sophisticated treatment of context will be required in many such models. While notions such as historical context (where the past history of an agent might affect its later actions) and situational context (where the agent will choose a different action in a different situation) abound in ABM scenarios, we will discuss a case of a potentially changing context, where social effects can have a strong influence upon the perceptions of a group of subjects. In particular, we shall discuss a recently reported case where a biased worm in an election debate led to significant distortions in the reports given by participants as to who won the debate (Davis et al., 2011). Thus, participants in a different social context drew different conclusions about the perceived winner of the same debate, with associated significant differences between the two groups as to who they would vote for in the coming election. We extend this example to the problem of modelling the likely electoral responses of agents in the context of the climate change debate, and discuss the notion of interference between related questions that might be asked of an agent in a social simulation intended to simulate their likely responses. A modelling technology that could account for such strong social contextual effects would benefit regulatory bodies which need to navigate between multiple interests and concerns, and we shall present one viable avenue for constructing such a technology. A geometric approach will be presented, where the internal state of an agent is represented in a vector space, and their social context is naturally modelled as a set of basis states chosen with reference to the problem space.
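As a purely illustrative sketch of the geometric approach described above (all names and numbers below are our own assumptions, not taken from the paper), the agent's internal state can be written as a unit vector, each social context supplies its own orthonormal basis, and the probability of a given judgement is taken as the squared projection of the state onto a context basis vector, so the same state yields different responses under different contexts. In Python:

import numpy as np

def rotation_basis(theta):
    """Orthonormal 2-D basis rotated by angle theta (radians); columns are basis vectors."""
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

# Illustrative internal state of an agent, as a unit vector (assumed values).
state = np.array([0.8, 0.6])
state = state / np.linalg.norm(state)

# Two social contexts, modelled as differently oriented basis sets (assumed angles).
context_a = rotation_basis(0.0)        # e.g. watching the debate alone
context_b = rotation_basis(np.pi / 5)  # e.g. watching with a biased group

for name, basis in [("context A", context_a), ("context B", context_b)]:
    # Probability of judging "candidate X won" = squared projection of the
    # state onto the first basis vector of that context.
    p_x_won = float(np.dot(basis[:, 0], state) ** 2)
    print(f"{name}: P('X won') = {p_x_won:.2f}")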

Relevance:

100.00%

Publisher:

Abstract:

Emotional processes modulate the size of the eyeblink startle reflex in a picture-viewing paradigm, but it is unclear whether emotional processes are responsible for blink modulation in human conditioning. Experiment 1 involved an aversive differential conditioning phase followed by an extinction phase in which acoustic startle probes were presented during CS+, CS-, and intertrial intervals. Valence ratings and affective priming showed the CS+ was unpleasant postacquisition. Blink startle magnitude was larger during CS+ than during CS-. Experiment 2 used the same design in two groups trained with pleasant or unpleasant pictorial USs. Ratings and affective priming indicated that the CS+ had become pleasant or unpleasant in the respective group. Regardless of CS valence, blink startle was larger during CS+ than CS- in both groups. Thus, startle was not modulated by CS valence.

Relevance:

100.00%

Publisher:

Abstract:

Taxes are an important component of investing that is commonly overlooked in both the literature and in practice. For example, many understand that taxes will reduce an investment's return, but less understood is the risk-sharing nature of taxes, which also reduces the investment's risk. This thesis examines how taxes affect the optimal asset allocation and asset location decision in an Australian environment. It advances the model of Horan & Al Zaman (2008) by improving the method by which the present value of tax liabilities is calculated, using an after-tax risk-free discount rate, and by incorporating any new or reduced tax liabilities generated into its expected risk and return estimates. The asset allocation problem is examined for a range of different scenarios using Australian parameters, including different risk aversion levels, personal marginal tax rates, investment horizons, borrowing premiums, high or low inflation environments, and different starting cost bases. The findings support the Horan & Al Zaman (2008) conclusion that equities should be held in the taxable account. In fact, these findings are strengthened, with most of the efficient frontier maximising equity holdings in the taxable account instead of only half. Furthermore, these findings transfer to the Australian case, where it is found that taxed Australian investors should always invest in equities first through the taxable account before investing in superannuation. However, untaxed Australian investors should invest in equities first through superannuation. With borrowings allowed in the taxable account (no borrowing premium), Australian taxed investors should hold 100% of the superannuation account in the risk-free asset, while undertaking leverage in the taxable account to achieve the desired risk-return. Introducing a borrowing premium decreases the likelihood of holding 100% of super in the risk-free asset for taxable investors. The findings also suggest that the higher the marginal tax rate, the higher the borrowing premium required to overcome this effect. Finally, as the investor's marginal tax rate increases, the overall allocation to equities should increase, because taxation increases the sharing of both risk and return, so the investor must take on more equity exposure to achieve the same risk/return level as an investor facing a lower tax rate. The investment horizon has a minimal impact on the optimal allocation decision in the absence of factors such as mean reversion and human capital.
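As a minimal worked sketch of the discounting step described above (function names and parameter values are our own illustrative assumptions, not the thesis's model; growth in the unrealised gain and Australian CGT details such as the discount method are ignored), the present value of a deferred capital-gains tax liability can be computed with an after-tax risk-free discount rate:

def pv_tax_liability(value, cost_base, cgt_rate, rf_rate, tax_rate, years):
    """Present value of the tax payable on the current unrealised gain if
    realisation is deferred for `years`, discounted at the after-tax
    risk-free rate (a simplifying illustration only)."""
    tax_on_current_gain = (value - cost_base) * cgt_rate
    after_tax_rf = rf_rate * (1 - tax_rate)
    return tax_on_current_gain / (1 + after_tax_rf) ** years

# Hypothetical inputs: $100,000 holding, $60,000 cost base, 23.25% effective
# CGT rate, 4% risk-free rate, 32.5% marginal tax rate, sale in 10 years.
print(round(pv_tax_liability(100_000, 60_000, 0.2325, 0.04, 0.325, 10), 2))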

Relevance:

100.00%

Publisher:

Abstract:

Since the 1980s the locus of manufacturing and of some services has moved to countries of the Global South. Liberalization of trade and investment has added two billion people to the world labour supply and brought workers everywhere into intense competition with each other. Under orthodox neoliberal and neoclassical approaches, free trade and open investment should benefit all countries and lead to convergence. However, considerable differences in wages and working hours exist between workers of the Global North and those of the Global South. The organising question for the thesis is why workers in different countries but the same industries get different wages. Empirical evidence reviewed in the thesis shows that productivity does not explain these wage differences and that workers in some parts of the South are more productive than workers in the North. Part of the thesis examines the usefulness of explanations drawn from Marxist, institutionalist and global commodity chain approaches. There is a long-established argument in Marxist and neo-Marxist writings that differences between North and South result from imperialism and the exercise of power. This is the starting point for reviewing ways of understanding divisions between workers as the outcome of a global class structure. In turn, a fault line is postulated between productive and unproductive labour that largely replicates the division between the Global North and the Global South. Workers and their organizations need shared actions if they are to resist global competition and wage disparities. Solidarity has been the clarion call of progressive movements from the Internationals of the 19th century through to the current Global Unions and International Confederation of Trade Unions (ICTU). The thesis examines how nationalism and particular interests have undermined solidarity and reviews the major implications for current efforts to establish and advance a global labour position.

Relevance:

100.00%

Publisher:

Abstract:

In a study aimed at better understanding how staff and students adapt to new blended studio learning environments (BSLEs), a group of 165 second-year architecture students at a large school of architecture in Australia was separated into two different design studio learning environments. 70% of students were allocated to a traditional studio learning environment (TSLE) and 30% to a new, high-technology-embedded, prototype digital learning laboratory. The digital learning laboratory was purpose-designed for the case-study users, adapted Student-Centred Active Learning Environment for Undergraduate Programs (SCALE-UP) principles, and was built as part of a larger university research project. The architecture students attended the same lectures, followed the same studio curriculum and completed the same pieces of assessment; the only major differences were the teaching staff and the physical environment within which the studios were conducted. At the end of the semester, the staff and students were asked to complete a questionnaire about their experiences and preferences within the two respective learning environments. Following this, participants were invited to take part in focus groups, where a synergistic approach was adopted. Using a dual-method qualitative approach, the questionnaire and survey data were coded and analysed using both thematic analysis and grounded theory methodology. The results from these two approaches were compared, contrasted and finally merged to reveal six distinct emerging themes, which were instrumental in offering resistance to, or influencing adaptation to, the new BSLE. This paper reports on the study, discusses the major contributors to resistance and proposes points for consideration when transitioning from a TSLE to a BSLE.

Relevance:

100.00%

Publisher:

Abstract:

Many studies into construction procurement methods reveal evidence of a need to change the culture and attitude in the construction industry, transitioning from traditional adversarial relationships to cooperative and collaborative relationships. At the same time there is also increasing concern and discussion regarding alternative procurement methods, involving a movement away from traditional procurement systems. Relational contracting approaches, such as partnering and relationship management, are business strategies that align the objectives of clients, commercial participants and stakeholders. They provide a collaborative environment and a framework for all participants to adapt their behaviour to project objectives, and allow for the engagement of subcontractors and suppliers down the supply chain. The efficacy of relationship management in the client and contractor groups is proven and well documented. However, the industry has a history of slow implementation of relational contracting down the supply chain. Furthermore, little research on relationship management has been conducted in the supply chain context. This research aims to explore the association between relational contracting structures and processes and supply chain sustainability in the civil engineering construction industry. It endeavours to shed light on the practices and prerequisites for relationship management implementation success and for supply chain sustainability to develop. The research methodology is a triangulated approach based on Cheung's (2006) earlier research, in which questionnaire survey, interviews and case studies were conducted. This new research includes a face-to-face questionnaire survey that was carried out with 100 professionals from 27 contracting organisations in Queensland from June 2008 to January 2009. A follow-up sub-questionnaire, further examining project participants' perspectives, was sent to another group of professionals (as identified in the main questionnaire survey). Statistical analyses including multiple regression, correlation, principal component factor analysis and analysis of variance were used to identify the underlying dimensions and test the relationships among variables. Interviews and case studies were conducted to assist in providing a deeper understanding as well as explaining the findings of the quantitative study. The qualitative approaches also gave the opportunity to critique and validate the research findings. This research presents the implementation of relationship management from the contractor's perspective. Findings show that adoption of the relational contracting approach in the supply chain is limited; contractors still prefer to keep suppliers and subcontractors at arm's length. This research shows that the degree of match and mismatch between organisational structuring and organisational process has an impact on staff's commitment level and performance effectiveness. Key issues affecting performance effectiveness and relationship effectiveness include total influence between parties, access to information, personal acquaintance, communication process, risk identification, timely problem solving and commercial framework. Findings also indicate that alliance and Early Contractor Involvement (ECI) projects achieve higher performance effectiveness at both short-term and long-term levels compared to projects with either no or partial relationship management adopted.

Relevance:

100.00%

Publisher:

Abstract:

Water uptake refers to the ability of atmospheric particles to take up water vapour from the surrounding atmosphere. This is an important property that affects particle size and phase, and therefore influences many characteristics of aerosols relevant to air quality and climate. However, the water uptake properties of many important atmospheric aerosol systems, including those related to the oceans, are still not fully understood. Therefore, the primary aim of this PhD research program was to investigate the water uptake properties of marine aerosols. In particular, the effect of organics on marine aerosol water uptake was investigated. Field campaigns were conducted at remote coastal sites on the east coast of Australia (Agnes Water; March-April 2007) and the west coast of Ireland (Mace Head; June 2007), and laboratory measurements were performed on bubble-generated sea spray aerosols. A combined Volatility-Hygroscopicity-Tandem Differential Mobility Analyser (VH-TDMA) was employed in all experiments. This system probes the changes in the hygroscopic properties of nanoparticles as volatile organic components are progressively evaporated. It also allows particle composition to be inferred from combined volatility-hygroscopicity measurements. Frequent new particle formation and growth events were observed during the Agnes Water campaign. The VH-TDMA was used to investigate freshly nucleated particles (17-22.5 nm) and it was found that the condensation of sulphate and/or organic vapours was responsible for driving particle growth during the events. Aitken mode particles (~40 nm) were also measured with the VH-TDMA. In 3 out of 18 VH-TDMA scans, evaporation of a volatile organic component caused a very large increase in hygroscopicity that could only be explained by an increase in the absolute water uptake of the particle residuals, and not merely an increase in their relative hygroscopicity. This indicated the presence of organic components that were suppressing the hygroscopic growth of mixed particles on the timescale of humidification in the VH-TDMA (6.5 seconds). It was suggested that the suppression of water uptake was caused by either a reduced rate of hygroscopic growth due to the presence of organic films, or organic-inorganic interactions in solution droplets that had a negative effect on hygroscopicity. Mixed organic-inorganic particles were rarely observed by the VH-TDMA during the summer campaign conducted at Mace Head. The majority of particles below 100 nm in clean, marine air appeared to be sulphates neutralised to varying degrees by ammonia. On one unique day, 26 June 2007, particularly large concentrations of sulphate aerosol were observed and identified as volcanic emissions from Iceland. The degree of neutralisation of the sulphate aerosol by ammonia was calculated from the VH-TDMA data and found to compare well with the same quantity measured by an aerosol mass spectrometer. This was an important verification of the VH-TDMA's ability to identify ammoniated sulphate aerosols based on the simultaneous measurement of aerosol volatility and hygroscopicity. A series of measurements were also conducted on sea spray aerosols generated from Moreton Bay seawater samples in a laboratory-based bubble chamber. Accumulation mode sea spray particles (38-173 nm) were found to contain only a minor organic fraction (< 10%) that had little effect on particle hygroscopicity.
These results are important because previous studies have observed that accumulation mode sea spray particles are predominantly organic (~80% organic mass fraction). The work presented here suggests that this is not always the case, and that there may be currently unknown factors that are controlling the transfer of organics to the aerosol phase during the bubble bursting process. Taken together, the results of this research program have significantly improved our understanding of organic-containing marine aerosols and the way they interact with water vapour in the atmosphere.
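As a side note on the neutralisation calculation mentioned above, one common convention (not necessarily the exact quantity derived from the VH-TDMA data in the thesis) expresses the degree of neutralisation as the molar ratio of ammonium to twice the sulphate, so that sulphuric acid, ammonium bisulphate and ammonium sulphate correspond to 0, 0.5 and 1 respectively. A small sketch with assumed concentrations:

def degree_of_neutralisation(nh4_molar, so4_molar):
    """Molar ratio of ammonium to the 2:1 stoichiometry of ammonium sulphate:
    0 = sulphuric acid, 0.5 = ammonium bisulphate, 1 = fully neutralised."""
    return nh4_molar / (2.0 * so4_molar)

# Assumed (illustrative) aerosol-phase concentrations in mol/m^3.
print(degree_of_neutralisation(nh4_molar=1.4e-8, so4_molar=1.0e-8))  # -> 0.7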

Relevance:

100.00%

Publisher:

Abstract:

Vehicle-emitted particles are of significant concern because of their potential to influence local air quality and human health. Transport microenvironments usually contain higher vehicle emission concentrations than other environments, and people spend a substantial amount of time in these microenvironments when commuting. Currently there is limited scientific knowledge on particle concentrations, passenger exposure and the distribution of vehicle emissions in transport microenvironments, partly because the instrumentation required to conduct such measurements is not available in many research centres. Information on passenger waiting time and location in such microenvironments has also not been investigated, which makes it difficult to evaluate a passenger's spatial-temporal exposure to vehicle emissions. Furthermore, current emission models are incapable of rapidly predicting emission distributions, given the complexity of variations in emission rates that result from changes in driving conditions, as well as the time spent in each driving condition within the transport microenvironment. In order to address these gaps in scientific knowledge, this work conducted, for the first time, a comprehensive statistical analysis of experimental data, along with multi-parameter assessment, exposure evaluation and comparison, and emission model development and application, in relation to traffic-interrupted transport microenvironments. The work aimed to quantify and characterise particle emissions and human exposure in transport microenvironments, with bus stations and a pedestrian crossing identified as suitable research locations representing typical transport microenvironments. Firstly, two bus stations in Brisbane, Australia, with different designs were selected for measurements of particle number size distributions and particle number and PM2.5 concentrations during two different seasons. Traffic and meteorological parameters were monitored simultaneously, aiming to quantify particle characteristics and investigate the impact of bus flow rate, station design and meteorological conditions on particle characteristics at the stations. The results showed higher concentrations of PN20-30 at the station situated in an open area (open station), which is likely attributable to the lower average daily temperature compared to the station with a canyon structure (canyon station). During precipitation events, particle number concentrations in the size range 25-250 nm decreased greatly, and the average daily reduction in PM2.5 concentration on rainy days compared to fine days was 44.2% and 22.6% at the open and canyon stations, respectively. The effect of ambient wind speed on particle number concentrations was also examined, and no relationship was found between particle number concentration and wind speed over the entire measurement period. In addition, 33 pairs of average half-hourly PN7-3000 concentrations were identified at the two stations during the same time of day and under the same ambient wind speed and precipitation conditions. The results of a paired t-test showed that the average half-hourly PN7-3000 concentrations at the two stations were not significantly different at the 5% significance level (t = 0.06, p = 0.96), which indicates that the different station designs were not a crucial factor influencing PN7-3000 concentrations.
A further assessment of passenger exposure to bus emissions on a platform was conducted at another bus station in Brisbane, Australia. The sampling was conducted over seven weekdays to investigate spatial-temporal variations in size-fractionated particle number and PM2.5 concentrations, as well as human exposure on the platform. For the whole day, the average PN13-800 concentration was 1.3 x 10^4 and 1.0 x 10^4 particles/cm^3 at the centre and end of the platform, respectively, of which PN50-100 accounted for the largest proportion of the total count. Furthermore, the contribution of exposure at the bus station to the overall daily exposure was assessed using two assumed scenarios, a school student and an office worker. It was found that, although the daily time fraction (the percentage of time spent at a location in a whole day) at the station was only 0.8%, the daily exposure fractions (the percentage of daily exposure accounted for by exposure at that location) at the station were 2.7% and 2.8% for exposure to PN13-800, and 2.7% and 3.5% for exposure to PM2.5, for the school student and the office worker, respectively. A new parameter, "exposure intensity" (the ratio of the daily exposure fraction to the daily time fraction), was also defined and calculated at the station, with values of 3.3 and 3.4 for exposure to PN13-800, and 3.3 and 4.2 for exposure to PM2.5, for the school student and the office worker, respectively. In order to quantify the enhanced emissions at critical locations and define the emission distribution for further dispersion modelling in traffic-interrupted transport microenvironments, a composite line source emission (CLSE) model was developed to specifically quantify exposure levels and describe the spatial variability of vehicle emissions in traffic-interrupted microenvironments. This model took into account the complexity of vehicle movements in the queue, as well as the different emission rates relevant to various driving conditions (cruise, decelerate, idle and accelerate), and it utilised multiple representative segments to capture the accurate emission distribution for real vehicle flow. The model not only helped to quantify the enhanced emissions at critical locations, but also helped to define the emission source distribution of the disrupted steady flow for further dispersion modelling. The model was then applied to estimate particle number emissions at a bidirectional bus station used by diesel and compressed natural gas fuelled buses. It was found that the acceleration distance was of critical importance when estimating particle number emissions, since the highest emissions occurred in sections where most of the buses were accelerating, and no significant increases were observed at locations where they idled. It was also shown that emissions at the front end of the platform were 43 times greater than at the rear of the platform. The CLSE model was also applied at a signalised pedestrian crossing, in order to assess the increase in particle number emissions from motor vehicles forced to stop and accelerate from rest. The CLSE model was used to calculate the total emissions produced by a specific number and mix of light petrol cars and diesel passenger buses, including 1 car travelling in 1 direction (1 car / 1 direction), 14 cars / 1 direction, 1 bus / 1 direction, 28 cars / 2 directions, 24 cars and 2 buses / 2 directions, and 20 cars and 4 buses / 2 directions.
It was found that the total emissions produced while stopping at a red signal were significantly higher than when the traffic moved at a steady speed. Overall, total emissions due to the interruption of the traffic increased by a factor of 13, 11, 45, 11, 41, and 43 for the above 6 cases, respectively. In summary, this PhD thesis presents the results of a comprehensive study on particle number and mass concentrations, together with particle size distributions, in a bus station transport microenvironment, as influenced by bus flow rates, meteorological conditions and station design. Passenger spatial-temporal exposure to bus-emitted particles was also assessed according to waiting time and location along the platform, as well as the contribution of exposure at the bus station to overall daily exposure. Due to the complexity of the interrupted traffic flow within transport microenvironments, a unique CLSE model was also developed, which is capable of quantifying emission levels at critical locations within the transport microenvironment, for the purpose of evaluating passenger exposure and conducting simulations of vehicle emission dispersion. The application of the CLSE model at a pedestrian crossing also demonstrated its applicability and simplicity of use in a real-world transport microenvironment.
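To make the "exposure intensity" metric defined above concrete, the following sketch (the helper name is ours; the input values are the rounded figures quoted in the abstract) simply divides the daily exposure fraction by the daily time fraction:

def exposure_intensity(daily_exposure_fraction, daily_time_fraction):
    """Ratio of the daily exposure fraction to the daily time fraction."""
    return daily_exposure_fraction / daily_time_fraction

daily_time_fraction = 0.008  # 0.8% of the day spent on the station platform

# PM2.5 exposure fractions quoted in the abstract (rounded inputs, so the
# computed intensities only approximately reproduce the reported 3.3 and 4.2).
for person, exposure_fraction in [("school student", 0.027), ("office worker", 0.035)]:
    ei = exposure_intensity(exposure_fraction, daily_time_fraction)
    print(person, "exposure intensity:", round(ei, 2))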

Relevance:

100.00%

Publisher:

Abstract:

The last two decades have seen the application of six sigma methodologies in many manufacturing and some service industries. Six sigma's success in manufacturing is well documented, but the same cannot be said about its implementation in services, where application is still limited to a small number of service types. This paper reviews the application of six sigma in service industries. Emphasis is given to application issues, such as the critical success factors and key performance indicators necessary for a project to be successful. A pilot study was carried out to highlight the issues discussed. Regardless of the service provided, a number of guidelines can be applied across varying types of services. The aim of this paper is to help widen the scope of six sigma application in services.

Relevance:

100.00%

Publisher:

Abstract:

The research objectives of this thesis were to contribute to Bayesian statistical methodology, specifically to risk assessment methodology and to spatial and spatio-temporal methodology, by modelling error structures using complex hierarchical models. Specifically, I hoped to consider two applied areas, and to use these applications as a springboard for developing new statistical methods as well as undertaking analyses that might give answers to particular applied questions. Thus, this thesis considers a series of models, firstly in the context of risk assessments for recycled water, and secondly in the context of water usage by crops. The research objective was to model error structures using hierarchical models in two problems: risk assessment analyses for wastewater, and a four-dimensional dataset assessing differences between cropping systems over time and over three spatial dimensions. The aim was to use the simplicity and insight afforded by Bayesian networks to develop appropriate models for risk scenarios, and to use Bayesian hierarchical models to explore the necessarily complex modelling of the four-dimensional agricultural data. The specific objectives of the research were: to develop a method for the calculation of credible intervals for the point estimates of Bayesian networks; to develop a model structure that incorporates all the experimental uncertainty associated with various constants, thereby allowing the calculation of more credible credible intervals for a risk assessment; to model a single day's data from the agricultural dataset in a way that satisfactorily captured the complexities of the data; to build a model for several days' data, in order to consider how the full data might be modelled; and finally to build a model for the full four-dimensional dataset and to consider the time-varying nature of the contrast of interest, having satisfactorily accounted for possible spatial and temporal autocorrelations. This work forms five papers, two of which have been published, two submitted, and the final paper still in draft. The first two objectives were met by recasting the risk assessments as directed acyclic graphs (DAGs). In the first case, we elicited uncertainty for the conditional probabilities needed by the Bayesian net, incorporated these into a corresponding DAG, and used Markov chain Monte Carlo (MCMC) to find credible intervals for all the scenarios and outcomes of interest. In the second case, we incorporated the experimental data underlying the risk assessment constants into the DAG, and also treated some of that data as needing to be modelled as an 'errors-in-variables' problem [Fuller, 1987]. This illustrated a simple method for the incorporation of experimental error into risk assessments. In considering one day of the three-dimensional agricultural data, it became clear that geostatistical models or conditional autoregressive (CAR) models over the three dimensions were not the best way to approach the data. Instead, CAR models were used with neighbours only in the same depth layer. This gave flexibility to the model, allowing both the spatially structured and non-structured variances to differ at all depths. We call this model the CAR layered model. Given the experimental design, the fixed part of the model could have been modelled as a set of means by treatment and by depth, but doing so allows little insight into how the treatment effects vary with depth.
Hence, a number of essentially non-parametric approaches were taken to examine the effects of depth on treatment, with the model of choice incorporating an errors-in-variables approach for depth in addition to a non-parametric smooth. The statistical contribution here was the introduction of the CAR layered model; the applied contribution was the analysis of moisture over depth and the estimation of the contrast of interest, together with its credible intervals. These models were fitted using WinBUGS [Lunn et al., 2000]. The work in the fifth paper deals with the fact that, with large datasets, the use of WinBUGS becomes more problematic because of its highly correlated term-by-term updating. In this work, we introduce a Gibbs sampler with block updating for the CAR layered model. The Gibbs sampler was implemented by Chris Strickland using pyMCMC [Strickland, 2010]. This framework is then used to consider five days' data, and we show that soil moisture for all the various treatments reaches levels particular to each treatment at a depth of 200 cm and thereafter stays constant, albeit with variances that increase with depth. In an analysis across three spatial dimensions and time, there are many interactions of time and the spatial dimensions to be considered. Hence, we chose to use a daily model and to repeat the analysis at all time points, effectively creating an interaction model of time by the daily model. Such an approach allows great flexibility. However, it does not allow insight into the way in which the parameter of interest varies over time. Hence, a two-stage approach was also used, with estimates from the first stage being analysed as a set of time series. We see this spatio-temporal interaction model as a useful approach to data measured across three spatial dimensions and time, since it does not assume additivity of the random spatial or temporal effects.
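The following sketch (our own construction, not code from the thesis) illustrates the neighbourhood structure behind the CAR layered idea: spatial neighbours are defined only within the same depth layer, so a CAR prior built on this adjacency can let the spatially structured and unstructured variances vary freely with depth:

def layered_car_adjacency(nx, ny, n_depths):
    """Adjacency lists for an nx-by-ny grid repeated over n_depths layers,
    with rook neighbours inside a layer and no neighbours across layers."""
    def idx(i, j, d):
        return d * nx * ny + i * ny + j

    neighbours = {idx(i, j, d): [] for d in range(n_depths)
                  for i in range(nx) for j in range(ny)}
    for d in range(n_depths):
        for i in range(nx):
            for j in range(ny):
                for di, dj in [(-1, 0), (1, 0), (0, -1), (0, 1)]:
                    ni, nj = i + di, j + dj
                    if 0 <= ni < nx and 0 <= nj < ny:
                        neighbours[idx(i, j, d)].append(idx(ni, nj, d))
    return neighbours

# Illustrative grid sizes (assumed, not the trial's actual layout).
adj = layered_car_adjacency(nx=4, ny=4, n_depths=3)
# Each site lists neighbours only at its own depth, so depth-specific
# spatial variance parameters can be attached layer by layer.
print(len(adj), "sites; site 0 neighbours:", adj[0])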

Relevance:

100.00%

Publisher:

Abstract:

This paper presents a comprehensive study to find the most efficient bitrate requirement for delivering mobile video that optimizes bandwidth while maintaining a good user viewing experience. In the study, forty participants were asked to choose the lowest quality video that would still provide a comfortable, long-term viewing experience, knowing that higher video quality is more expensive and bandwidth intensive. This paper proposes the lowest pleasing bitrates and corresponding encoding parameters for five different content types: cartoon, movie, music, news and sports. It also explores how the lowest pleasing quality is influenced by content type, image resolution, bitrate, and the user's gender, prior viewing experience, and preference. In addition, it analyzes the trajectory of users' progression while selecting the lowest pleasing quality. The findings reveal that the lowest bitrate requirement for a pleasing viewing experience is much higher than that for the lowest acceptable quality. Users' criteria for the lowest pleasing video quality are related to the video's content features, as well as its usage purpose and the user's personal preferences. These findings can provide video providers with guidance on what quality they should offer to please mobile users.

Relevance:

100.00%

Publisher:

Abstract:

In this chapter we take a high-level view of social media, focusing not on specific applications, domains, websites, or technologies; instead our interest is in the forms of engagement that social media engender. This is not to suggest that all social media are the same, or even that everyone's experience with any particular medium or technology is the same. However, we argue that common issues arise that characterize social media in a broad sense, and that provide a different analytic perspective than we would gain from looking at particular systems or applications. We do not take the perspective that social life merely happens "within" such systems, nor that social life "shapes" such systems, but rather that these systems provide a site for the production of social and cultural reality – that media are always already social, and that engagement with, in, and through media of all sorts is a thoroughly social phenomenon. Accordingly, in this chapter we examine two phenomena concurrently: social life seen through the lens of social media, and social media seen through the lens of social life. In particular, we want to understand the ways in which a set of broad phenomena concerning forms of participation in social life is articulated in the domain of social media. As a conceptual entry point, we use the notion of the "moral economy" as a means to open up the domain of inquiry. We first discuss the notion of the "moral economy" as it has been used by a number of social theorists, and then identify a particular set of conceptual concerns that we suggest link it to the phenomena of social networking in general. We then discuss a series of examples drawn from a range of studies to elaborate and ground this conceptual framework in empirical data. This leads us to a broader consideration of audiences and publics in social media that, we suggest, holds important lessons for how we treat social media analytically.

Relevance:

100.00%

Publisher:

Abstract:

Marketers spend considerable resources to motivate people to consume their products and services as a means of goal attainment (Bagozzi and Dholakia, 1999). Why people increase, decrease, or stop consuming some products is based largely on how well they perceive they are doing in pursuit of their goals (Carver and Scheier, 1992). Yet despite the importance for marketers in understanding how current performance influences a consumer’s future efforts, this topic has received little attention in marketing research. Goal researchers generally agree that feedback about how well or how poorly people are doing in achieving their goals affects their motivation (Bandura and Cervone, 1986; Locke and Latham, 1990). Yet there is less agreement about whether positive and negative performance feedback increases or decreases future effort (Locke and Latham, 1990). For instance, while a customer of a gym might cancel his membership after receiving negative feedback about his fitness, the same negative feedback might cause another customer to visit the gym more often to achieve better results. A similar logic can apply to many products and services from the use of cosmetics to investing in mutual funds. The present research offers managers key insights into how to engage customers and keep them motivated. Given that connecting customers with the company is a top research priority for managers (Marketing Science Institute, 2006), this article provides suggestions for performance metrics including four questions that managers can use to apply the findings.

Relevance:

100.00%

Publisher:

Abstract:

Emerging from the challenge to reduce energy consumption in buildings is the need for energy simulation to be used more effectively to support integrated decision making in early design. As a critical response to a Green Star case study, we present DEEPA, a parametric modeling framework that enables architects and engineers to work at the same semantic level to generate shared models for energy simulation. A cloud-based toolkit provides web and data services for parametric design software that automate the process of simulating and tracking design alternatives, by linking building geometry more directly to analysis inputs. Data, semantics, models and simulation results can be shared on the fly. This allows the complex relationships between architecture, building services and energy consumption to be explored in an integrated manner, and decisions to be made collaboratively.
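A purely illustrative sketch (our own names and simplifications, not the DEEPA API) of the workflow described above, in which each parametric design alternative links its geometry directly to simulation inputs and the results of the alternatives are tracked for comparison:

def analysis_inputs(alt):
    """Derive energy-simulation inputs from the parametric geometry (illustrative only)."""
    floor_area = alt["width"] * alt["depth"] * alt["storeys"]
    perimeter = 2 * (alt["width"] + alt["depth"])
    glazing_area = alt["window_to_wall_ratio"] * perimeter * 3.0 * alt["storeys"]
    return {"floor_area_m2": floor_area, "glazing_area_m2": glazing_area}

def simulate(inputs):
    """Placeholder energy model standing in for the cloud simulation service."""
    return 120.0 * inputs["floor_area_m2"] / 1000 + 0.3 * inputs["glazing_area_m2"]

# Hypothetical design alternatives generated by the parametric model.
alternatives = {
    "alt-01": {"width": 20, "depth": 15, "storeys": 4, "window_to_wall_ratio": 0.3},
    "alt-02": {"width": 20, "depth": 15, "storeys": 4, "window_to_wall_ratio": 0.6},
}

tracked_results = {}
for name, alt in alternatives.items():
    tracked_results[name] = simulate(analysis_inputs(alt))
    print(name, "annual energy (arbitrary units):", round(tracked_results[name], 1))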

Relevance:

100.00%

Publisher:

Abstract:

The recent exponential rise in the number of behaviour disorders has been the focus of a wide range of commentaries, ranging from the pedagogic and the administrative to the sociological and even the legal. This book will be the first to apply, in a systematic and thorough manner, the ideas of the foundational discipline of philosophy. A number of philosophical tools are applied here, tools arising through the medium of the traditional philosophical debates, such as those concerning governance, truth, logic, ethics, free will, law and language. Each forms a separate chapter, but together they constitute a comprehensive, rigorous and original insight into what is now an important set of concerns for all those interested in the governance of children. The intention is threefold: first, to demonstrate the utility, accessibility and effectiveness of philosophical ideas within this important academic area; philosophy does not have to be regarded as an arcane and esoteric discipline with only limited contemporary application, far from it. Second, the book offers a new set of approaches and ideas for both researchers and practitioners within education, a field in danger of continually using the same ideas to endlessly repeat the same conclusions. Third, the book offers a viable alternative to the dominant psychological model, which increasingly employs pathology as its central rationale for conduct. The book would be of interest not only to mainstream educators and to those students and academics interested in philosophy, and more specifically in the application of philosophical ideas to educational issues; it would also be an appropriate text for courses on education and difference and, given the breadth of the philosophical issues addressed, for courses on applied philosophy.