982 results for Software metrics
Abstract:
Despite the increasing use of groupware technologies in education, there is little evidence of their impact, especially within an enquiry-based learning (EBL) context. In this paper, we examine the use of a commercial-standard Group Intelligence software called GroupSystems® ThinkTank. To date, ThinkTank has been adopted mainly in the USA; it supports teams in generating ideas, categorising, prioritising, voting and multi-criteria decision-making, and automatically generates a report at the end of each session. The software was used by students carrying out an EBL project, set by employers, for a full academic year. The criteria for assessing the impact of ThinkTank on student learning were creativity, participation, productivity, engagement and understanding. Data were collected throughout the year using a combination of interviews, questionnaires, and written feedback from employers. The overall findings show an increase in levels of productivity and creativity and evidence of a deeper understanding of the students' own work, but some variation in attitudes towards participation in the early stages of the project.
Abstract:
Metrics are often used to compare the climate impacts of emissions from various sources, sectors or nations. These are usually based on global-mean input, and so there is the potential that important information on smaller scales is lost. Assuming a non-linear dependence of the climate impact on local surface temperature change, we explore the loss of information about regional variability that results from using global-mean input in the specific case of heterogeneous changes in ozone, methane and aerosol concentrations resulting from emissions from road traffic, aviation and shipping. Results from equilibrium simulations with two general circulation models are used. An alternative metric for capturing the regional climate impacts is investigated. We find that the application of a metric that is first calculated locally and then averaged globally captures a more complete and informative signal of climate impact than one that uses global-mean input. The loss of information when heterogeneity is ignored is largest in the case of aviation. Further investigation of the spatial distribution of temperature change indicates that although the pattern of temperature response does not closely match the pattern of the forcing, the forcing pattern still influences the response pattern on a hemispheric scale. When the short-lived transport forcing is superimposed on present-day anthropogenic CO2 forcing, the heterogeneity in the temperature response to CO2 dominates. This suggests that the importance of including regional climate impacts in global metrics depends on whether small sectors are considered in isolation or as part of the overall climate change.
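The information loss described above can be illustrated with a toy calculation. If climate impact depends non-linearly on local temperature change (assumed quadratic here purely for illustration; this is not the paper's model), then a metric evaluated locally and averaged globally differs from one evaluated at the global-mean input:

```python
# Toy illustration (NOT the paper's model or data): impact assumed ∝ ΔT²,
# evaluated over a set of hypothetical regional temperature changes.

regional_dT = [0.1, 0.3, 0.2, 0.9, 0.5]  # hypothetical regional ΔT (K)

def impact(dT):
    """Assumed non-linear (quadratic) impact function, for illustration only."""
    return dT ** 2

# Metric based on global-mean input: apply impact() to the mean ΔT.
global_mean_dT = sum(regional_dT) / len(regional_dT)
impact_from_mean = impact(global_mean_dT)

# Metric calculated locally first, then averaged globally.
mean_of_local_impacts = sum(impact(dT) for dT in regional_dT) / len(regional_dT)

# For a convex impact function the locally evaluated metric is larger
# (Jensen's inequality), so regional variability carries real information.
print(impact_from_mean, mean_of_local_impacts)
```

With these numbers the global-mean-based metric gives 0.16 while the locally evaluated one gives 0.24, showing how heterogeneity that is averaged away can change the measured impact.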
Abstract:
Multi-gas approaches to climate change policies require a metric establishing ‘equivalences’ among emissions of various species. Climate scientists and economists have proposed four kinds of such metrics and debated their relative merits. We present a unifying framework that clarifies the relationships among them. We show, as have previous authors, that the global warming potential (GWP), used in international law to compare emissions of greenhouse gases, is a special case of the global damage potential (GDP), assuming (1) a finite time horizon, (2) a zero discount rate, (3) constant atmospheric concentrations, and (4) impacts that are proportional to radiative forcing. Both the GWP and GDP follow naturally from a cost–benefit framing of the climate change issue. We show that the global temperature change potential (GTP) is a special case of the global cost potential (GCP), assuming a (slight) fall in the global temperature after the target is reached. We show how the four metrics should be generalized if there are intertemporal spillovers in abatement costs, distinguishing between private (e.g., capital stock turnover) and public (e.g., induced technological change) spillovers. Both the GTP and GCP follow naturally from a cost-effectiveness framing of the climate change issue. We also argue that if (1) damages are zero below a threshold and (2) infinitely large above a threshold, then cost-effectiveness analysis and cost–benefit analysis lead to identical results. Therefore, the GCP is a special case of the GDP. The UN Framework Convention on Climate Change uses the GWP, a simplified cost–benefit concept. The UNFCCC is framed around the ultimate goal of stabilizing greenhouse gas concentrations. Once a stabilization target has been agreed under the convention, implementation is clearly a cost-effectiveness problem. It would therefore be more consistent to use the GCP or its simplification, the GTP.
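For reference, the GWP that the abstract identifies as a special case of the GDP is conventionally defined (as in the IPCC assessment reports) as the ratio of the time-integrated radiative forcing of a pulse emission of species $x$ to that of an equal mass of CO$_2$, over a time horizon $H$:

```latex
\mathrm{GWP}_H(x) \;=\;
\frac{\int_0^H \mathrm{RF}_x(t)\,\mathrm{d}t}
     {\int_0^H \mathrm{RF}_{\mathrm{CO_2}}(t)\,\mathrm{d}t}
```

The finite horizon $H$ and the implicit zero discount rate correspond directly to assumptions (1) and (2) in the abstract.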
Abstract:
We evaluate the response to regional and latitudinal changes in aircraft NOx emissions using several climate metrics (radiative forcing (RF), Global Warming Potential (GWP), Global Temperature change Potential (GTP)). Global chemistry transport model integrations were performed with sustained perturbations in regional aircraft and aircraft-like NOx emissions. The RF due to the resulting ozone and methane changes is then calculated. We investigate the impact of emission changes for specific geographical regions (approximating to USA, Europe, India and China) and cruise altitude emission changes in discrete latitude bands covering both hemispheres. We find that lower latitude emission changes (per Tg N) cause ozone and methane RFs that are about a factor of 6 larger than those from higher latitude emission changes. The net RF is positive for all experiments. The meridional extent of the RF is larger for low latitude emissions. GWPs for all emission changes are positive, with tropical emissions having the largest values; the sign of the GTP depends on the choice of time horizon.
Abstract:
We examine the effect of ozone damage to vegetation caused by anthropogenic emissions of ozone precursor species and quantify it in terms of its impact on terrestrial carbon stores. A simple climate model is then used to assess the expected changes in global surface temperature from the resulting perturbations to atmospheric concentrations of carbon dioxide, methane, and ozone. The global temperature change potential (GTP) metric, which relates the global average surface temperature change induced by the pulse emission of a species to that induced by a unit mass of carbon dioxide, is used to characterize the impact of changes in emissions of ozone precursors on surface temperature as a function of time. For NOx emissions, the longer-timescale methane perturbation is of the opposite sign to the perturbations in ozone and carbon dioxide, so NOx emissions are warming in the short term, but cooling in the long term. For volatile organic compound (VOC), CO, and methane emissions, all the terms are warming for an increase in emissions. The GTPs for the 20 year time horizon are strong functions of emission location, with a large component of the variability owing to the different vegetation responses on different continents. At this time horizon, the induced change in the carbon cycle is the largest single contributor to the GTP metric for NOx and VOC emissions. For NOx emissions, we estimate a GTP20 of −9 (cooling) to +24 (warming) depending on assumptions of the sensitivity of vegetation types to ozone damage.
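The GTP used here is conventionally defined (following Shine et al.) as an end-point metric: the ratio of the global-mean surface temperature change at time $H$ after a pulse emission of species $x$ to that after a pulse emission of an equal mass of CO$_2$:

```latex
\mathrm{GTP}_H(x) \;=\;
\frac{\left.\Delta T(H)\right|_{x}}
     {\left.\Delta T(H)\right|_{\mathrm{CO_2}}}
```

Because it is evaluated at a single time $H$ rather than integrated, short-lived perturbations (such as the ozone term) fade from the GTP at long horizons, which is why the sign can flip with the choice of time horizon.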
Abstract:
A favoured method of assimilating information from state-of-the-art climate models into integrated assessment models of climate impacts is to use the transient climate response (TCR) of the climate models as an input, sometimes accompanied by a pattern matching approach to provide spatial information. More recent approaches to the problem use TCR with another independent piece of climate model output: the land-sea surface warming ratio (φ). In this paper we show why the use of φ in addition to TCR has such utility. Multiple linear regressions of surface temperature change onto TCR and φ in 22 climate models from the CMIP3 multi-model database show that the inclusion of φ explains a much greater fraction of the inter-model variance than using TCR alone. The improvement is particularly pronounced in North America and Eurasia in the boreal summer season, and in the Amazon all year round. The use of φ as the second metric is beneficial for three reasons: firstly it is uncorrelated with TCR in state-of-the-art climate models and can therefore be considered as an independent metric; secondly, because of its projected time-invariance, the magnitude of φ is better constrained than TCR in the immediate future; thirdly, the use of two variables is much simpler than approaches such as pattern scaling from climate models. Finally we show how using the latest estimates of φ from climate models with a mean value of 1.6—as opposed to previously reported values of 1.4—can significantly increase the mean time-integrated discounted damage projections in a state-of-the-art integrated assessment model by about 15 %. When compared to damages calculated without the inclusion of the land-sea warming ratio, this figure rises to 65 %, equivalent to almost 200 trillion dollars over 200 years.
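The two-metric regression described above can be sketched as an ordinary least-squares fit. The data below are synthetic placeholders, not CMIP3 output, and the coefficients and noise level are purely illustrative; the point is the shape of the calculation, not its values.

```python
import numpy as np

# Synthetic stand-ins for climate-model output (NOT CMIP3 data):
# one value per model for TCR (K) and the land-sea warming ratio phi.
rng = np.random.default_rng(0)
n_models = 22
tcr = rng.uniform(1.2, 2.6, n_models)   # transient climate response
phi = rng.uniform(1.3, 1.8, n_models)   # land-sea surface warming ratio

# Assume a regional warming that depends on both metrics, plus noise.
regional_dT = 0.9 * tcr + 1.5 * (phi - 1.0) + rng.normal(0, 0.05, n_models)

# Multiple linear regression: regional_dT ~ a*TCR + b*phi + c.
X = np.column_stack([tcr, phi, np.ones(n_models)])
coeffs, *_ = np.linalg.lstsq(X, regional_dT, rcond=None)
a, b, c = coeffs

# Fraction of inter-model variance explained by the two-metric fit.
pred = X @ coeffs
r2 = 1 - np.sum((regional_dT - pred) ** 2) / np.sum(
    (regional_dT - regional_dT.mean()) ** 2)
print(f"a={a:.2f}, b={b:.2f}, c={c:.2f}, R^2={r2:.3f}")
```

Because TCR and φ are uncorrelated here (as the abstract reports for the real models), each regressor contributes independent explanatory power, which is the stated reason the two-variable fit outperforms TCR alone.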
Abstract:
In this article, we present FACSGen 2.0, new animation software for creating static and dynamic three-dimensional facial expressions on the basis of the Facial Action Coding System (FACS). FACSGen permits total control over the action units (AUs), which can be animated at all levels of intensity and applied alone or in combination to an infinite number of faces. In two studies, we tested the validity of the software for the AU appearance defined in the FACS manual and the conveyed emotionality of FACSGen expressions. In Experiment 1, four FACS-certified coders evaluated the complete set of 35 single AUs and 54 AU combinations for AU presence or absence, appearance quality, intensity, and asymmetry. In Experiment 2, lay participants performed a recognition task on emotional expressions created with FACSGen software and rated the similarity of expressions displayed by human and FACSGen faces. Results showed good to excellent classification levels for all AUs by the four FACS coders, suggesting that the AUs are valid exemplars of FACS specifications. Lay participants' recognition rates for nine emotions were high, and human and FACSGen expressions were rated as very similar. The findings demonstrate the effectiveness of the software in producing reliable and emotionally valid expressions, and suggest its application in numerous scientific areas, including perception, emotion, and clinical and neuroscience research.
Abstract:
Organizations introduce acceptable use policies to deter employee computer misuse. Despite the controlling, monitoring and other forms of interventions employed, some employees misuse organizational computers to carry out personal work such as sending emails, surfing the internet, chatting, playing games, etc. These activities not only waste employees' productive time but also bring risk to the organization. A questionnaire was administered to a random sample of employees selected from large and medium scale software development organizations, which measured work computer misuse levels and the factors that influence such behavior. The presence of guidelines provided no evidence of a significant effect on the level of employee computer misuse. Not having access to Internet/email away from work, and organizational settings, were identified as the most significant influences on work computer misuse.
Abstract:
The 3rd World Chess Software Championship took place in Yokohama, Japan during August 2013. It pitted chess engines against each other on a common hardware platform - in this instance, the Intel i7 2740 Ivy Bridge with 16GB RAM supporting a potential eight processing threads. It was narrowly won by HIARCS from JUNIOR and PANDIX, with JONNY, SHREDDER and MERLIN taking the remaining places. Games, occasionally annotated, are available online.
The capability-affordance model: a method for analysis and modelling of capabilities and affordances
Abstract:
Existing capability models lack qualitative and quantitative means to compare business capabilities. This paper extends previous work and uses affordance theories to consistently model and analyse capabilities. We use the concept of objective and subjective affordances to model capability as a tuple of a set of resource affordance system mechanisms and action paths, dependent on one or more critical affordance factors. We identify an affordance chain of subjective affordances by which affordances work together to enable an action, and an affordance path that links action affordances to create a capability system. We define the mechanism and path underlying capability. We show how the affordance modelling notation (AMN) can represent the affordances comprising a capability. We propose a method to quantitatively and qualitatively compare capabilities using efficiency, effectiveness and quality metrics. The method is demonstrated by a medical example comparing the capabilities of syringe and needleless anaesthetic systems.
Abstract:
This paper presents a software-based study of a hardware-based, non-sorting median calculation method on a set of integer numbers. The method divides the binary representation of each integer element in the set into bit slices in order to find the element located in the middle position. The method exhibits a linear complexity order, and our analysis shows that the best execution-time performance is obtained when 4-bit slices are used for 8-bit and 16-bit integers, for almost any data set size. Results suggest that a software implementation of the bit-slice method for median calculation outperforms sorting-based methods, with the improvement growing with data set size. For data set sizes of N > 5, our simulations show an improvement of at least 40%.
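The bit-slice idea can be sketched in software as a radix-select: instead of sorting, the candidates are repeatedly bucketed by one slice of their binary representation, descending into the bucket that must contain the median. This is a hedged reconstruction of the general technique under stated assumptions (non-negative integers, lower median), not the authors' exact implementation.

```python
def bit_slice_median(values, bits=16, slice_bits=4):
    """Lower median of non-negative integers without full sorting,
    narrowing the search one bit slice (digit) at a time, MSB first.
    A sketch of the general bit-slice technique, not the paper's
    exact hardware-oriented method."""
    k = (len(values) - 1) // 2          # 0-based rank of the lower median
    mask = (1 << slice_bits) - 1
    candidates = list(values)
    for shift in range(bits - slice_bits, -1, -slice_bits):
        # Bucket the remaining candidates by the current slice value.
        buckets = [[] for _ in range(1 << slice_bits)]
        for v in candidates:
            buckets[(v >> shift) & mask].append(v)
        # Descend into the bucket containing the k-th smallest element.
        for bucket in buckets:
            if k < len(bucket):
                candidates = bucket
                break
            k -= len(bucket)
    return candidates[k]
```

Each pass touches every remaining candidate once and the slice count is fixed by the word width, which is consistent with the linear complexity the abstract reports; for 8-bit data one would call the function with `bits=8`.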
Abstract:
Brain injuries, including stroke, can be debilitating incidents with potential for severe long-term effects; many people stop making significant progress once they leave in-patient medical care and are unable to fully restore their quality of life when returning home. The aim of this collaborative project, between the Royal Berkshire NHS Foundation Trust and the University of Reading, is to provide a low-cost portable system that supports a patient's condition and recovery in hospital or at home. This is done by providing engaging applications with targeted gameplay that is individually tailored to the rehabilitation of the patient's symptoms. The applications are capable of real-time data capture and analysis in order to provide information to therapists on patient progress and to further improve the personalized care that an individual can receive.