845 results for Wireless performance metrics
Abstract:
Shrinking product lifecycles, tough international competition, swiftly changing technologies, ever-increasing customer quality expectations and demand for high-variety options are some of the forces driving the next generation of development processes. To overcome these challenges, the design cost and development time of a product have to be reduced and its quality improved. Design reuse is considered one of the lean strategies for winning the race in this competitive environment: it can reduce product development time, product development cost and the number of defects, all of which ultimately influence product performance in cost, time and quality. However, little or no work has been carried out to quantify the effectiveness of design reuse on product development performance measures such as design cost, development time and quality. Therefore, in this study we propose a systematic design-reuse-based product design framework and develop a design leanness index (DLI) as a measure of the effectiveness of design reuse. The DLI is a representative measure of reuse effectiveness in cost, development time and quality. Through this index, a clear relationship between the reuse measure and product development performance metrics is established. Finally, a cost-based model is developed to maximise the DLI for a product within a given set of constraints, achieving leanness in the design process.
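As a rough illustration only (the abstract does not give the DLI formula; the weighted-sum form and equal weights below are assumptions), a leanness index of this kind might combine normalised reuse benefits in cost, time and quality:

```python
# Illustrative sketch only: the weighted-average form and the equal
# weights are assumptions, not the paper's published DLI formula.

def design_leanness_index(cost_saving, time_saving, defect_reduction,
                          weights=(1/3, 1/3, 1/3)):
    """Combine normalised reuse benefits (each in [0, 1]) into one index.

    cost_saving      -- fraction of design cost avoided through reuse
    time_saving      -- fraction of development time avoided through reuse
    defect_reduction -- fraction of defects avoided through reuse
    """
    w_c, w_t, w_q = weights
    return w_c * cost_saving + w_t * time_saving + w_q * defect_reduction

# Example: 30% cost saving, 25% time saving, 40% fewer defects
print(design_leanness_index(0.30, 0.25, 0.40))  # -> 0.3166...
```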
Abstract:
Marketers spend considerable resources to motivate people to consume their products and services as a means of goal attainment (Bagozzi and Dholakia, 1999). Why people increase, decrease, or stop consuming some products is based largely on how well they perceive they are doing in pursuit of their goals (Carver and Scheier, 1992). Yet despite the importance for marketers in understanding how current performance influences a consumer’s future efforts, this topic has received little attention in marketing research. Goal researchers generally agree that feedback about how well or how poorly people are doing in achieving their goals affects their motivation (Bandura and Cervone, 1986; Locke and Latham, 1990). Yet there is less agreement about whether positive and negative performance feedback increases or decreases future effort (Locke and Latham, 1990). For instance, while a customer of a gym might cancel his membership after receiving negative feedback about his fitness, the same negative feedback might cause another customer to visit the gym more often to achieve better results. A similar logic can apply to many products and services from the use of cosmetics to investing in mutual funds. The present research offers managers key insights into how to engage customers and keep them motivated. Given that connecting customers with the company is a top research priority for managers (Marketing Science Institute, 2006), this article provides suggestions for performance metrics including four questions that managers can use to apply the findings.
Abstract:
Airports represent the epitome of complex systems, with multiple stakeholders, multiple jurisdictions and complex interactions between many actors. The large number of existing models that capture different aspects of the airport is a testament to this. However, these existing models do not systematically consider modelling requirements, nor how stakeholders such as airport operators or airlines would make use of them. This can detrimentally affect the verification and validation of models and makes the development of extensible and reusable modelling tools difficult. This paper develops, from the Concept of Operations (CONOPS) framework, a methodology to help structure the review and development of modelling capabilities and usage scenarios. The method is applied to a review of existing airport terminal passenger models. It is found that existing models can be broadly categorised according to four usage scenarios: capacity planning, operational planning and design, security policy and planning, and airport performance review. The models, the performance metrics they evaluate and their usage scenarios are discussed. Capacity and operational planning models predominantly focus on performance metrics such as waiting time, service time and congestion, whereas performance review models attempt to link these to passenger satisfaction outcomes. Security policy models, on the other hand, focus on probabilistic risk assessment. However, there is an emerging need to capture trade-offs between multiple criteria such as security and processing time. Based on the CONOPS framework and the literature findings, guidance is provided for the development of future airport terminal models.
Abstract:
The Australian e-Health Research Centre and Queensland University of Technology recently participated in the TREC 2011 Medical Records Track. This paper reports on our methods, results and experience using a concept-based information retrieval approach. Our concept-based approach is intended to overcome specific challenges we identify in searching medical records. Queries and documents are transformed from their term-based originals into medical concepts as defined by the SNOMED CT ontology. Results show our concept-based approach performed above the median in all three performance metrics: bpref (+12%), R-prec (+18%) and Prec@10 (+6%).
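For reference, two of the reported metrics are straightforward to compute from a ranked result list; a minimal sketch (bpref additionally distinguishes judged from unjudged documents and is omitted here for brevity):

```python
def precision_at_k(ranked_ids, relevant_ids, k=10):
    """Fraction of the top-k retrieved documents that are relevant."""
    top_k = ranked_ids[:k]
    return sum(1 for d in top_k if d in relevant_ids) / k

def r_precision(ranked_ids, relevant_ids):
    """Precision at rank R, where R is the number of relevant documents."""
    r = len(relevant_ids)
    if r == 0:
        return 0.0
    top_r = ranked_ids[:r]
    return sum(1 for d in top_r if d in relevant_ids) / r

ranked = ["d3", "d7", "d1", "d9", "d2"]
relevant = {"d1", "d7"}
print(precision_at_k(ranked, relevant, k=5))  # 0.4 (2 of top 5 relevant)
print(r_precision(ranked, relevant))          # 0.5 (1 of top 2 relevant)
```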
Abstract:
The concept of Six Sigma was initiated in the 1980s by Motorola. Since then it has been implemented in several manufacturing and service organizations. Until now, Six Sigma implementation in services has been mostly limited to healthcare and financial services in the private sector. Its implementation is now gradually picking up in services such as call centers, education, and construction and related engineering, in the private as well as the public sector. Through a literature review, a questionnaire survey and a multiple-case-study approach, the paper develops a conceptual framework to facilitate widening the scope of Six Sigma implementation in service organizations. Using grounded theory methodology, this study develops theory for Six Sigma implementation in service organizations. The study involves a questionnaire survey and case studies to understand the issues and build a conceptual framework. The survey, exploratory in nature, was conducted in service organizations in Singapore. The case studies involved three service organizations that implemented Six Sigma; the objective was to explore and understand the issues highlighted by the survey and the literature. The findings confirm the inclusion of critical success factors, critical-to-quality characteristics, and a set of tools and techniques, as observed in the literature. In the case of key performance indicators, there are differing interpretations in the literature as well as among industry practitioners: some sources describe them as performance metrics, whereas others regard them as key process input or output variables, and practitioners of Six Sigma interpret them similarly. The responses 'not relevant' and 'unknown to us' as reasons for not implementing Six Sigma show the need to understand the specific requirements of service organizations. Though much theoretical description of Six Sigma is available, there has been limited rigorous academic research on it. This gap is far more pronounced for Six Sigma implementation in service organizations, where the theory is not yet mature. Identifying this need, the study contributes through a theory-building exercise, developing a conceptual framework for understanding the issues involved in its implementation in service organizations.
Abstract:
The Australian e-Health Research Centre and Queensland University of Technology recently participated in the TREC 2012 Medical Records Track. This paper reports on our methods, results and experience using an approach that exploits the concepts and inter-concept relationships defined in the SNOMED CT medical ontology. Our concept-based approach is intended to overcome specific challenges in searching medical records, namely vocabulary mismatch and granularity mismatch. To tackle vocabulary mismatch, queries and documents are transformed from their term-based originals into medical concepts as defined by the SNOMED CT ontology. To tackle granularity mismatch, we make use of the SNOMED CT parent-child `is-a' relationships between concepts to weight documents that contain concepts subsumed by the query concepts. Finally, we experiment with other SNOMED CT relationships besides the is-a relationship to weight concepts related to query concepts. Results show our concept-based approach performed significantly above the median in all four performance metrics. Further improvements are achieved by the weighting of subsumed concepts, overall leading to improvements above the median of 28% infAP, 10% infNDCG, 12% R-prec and 7% Prec@10. Incorporating relationships other than is-a yielded mixed results; more research is required to determine which SNOMED CT relationships are best employed when weighting related concepts.
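A minimal sketch of the subsumption-weighting idea: documents whose concepts are descendants of a query concept in the is-a hierarchy receive a discounted match weight. The toy hierarchy, the concept names and the 0.5 discount are illustrative assumptions, not SNOMED CT content or the paper's parameters:

```python
# Toy is-a hierarchy: child concept -> set of parent concepts (assumed).
IS_A_PARENTS = {
    "bacterial_pneumonia": {"pneumonia"},
    "pneumonia": {"lung_disease"},
}

def subsumed_by(concept, ancestor, parents=IS_A_PARENTS):
    """True if `ancestor` is reachable from `concept` via is-a links."""
    frontier = set(parents.get(concept, set()))
    while frontier:
        if ancestor in frontier:
            return True
        frontier = set().union(*(parents.get(c, set()) for c in frontier))
    return False

def concept_match_weight(doc_concept, query_concept, discount=0.5):
    """Full weight for exact matches, a discounted weight for subsumed ones."""
    if doc_concept == query_concept:
        return 1.0
    if subsumed_by(doc_concept, query_concept):
        return discount
    return 0.0

print(concept_match_weight("bacterial_pneumonia", "pneumonia"))  # 0.5
```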
Abstract:
This paper presents a long-term experiment in which a mobile robot uses adaptive spherical views to localize itself and navigate inside a non-stationary office environment. The office houses seven members of staff and its appearance changes continuously over time due to their daily activities. The experiment runs as an episodic navigation task in the office over a period of eight weeks. The spherical views are stored in the nodes of a pose graph and are updated in response to changes in the environment. The updating mechanism is inspired by the concepts of long- and short-term memory. The experimental evaluation uses three performance metrics that assess the quality of both the adaptive spherical views and the navigation over time.
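A hedged sketch of the long-/short-term memory idea: a newly observed change is held in short-term memory and promoted into the persistent view only after repeated confirmation. The counters and the promotion threshold below are assumptions, not the paper's actual update rule:

```python
def update_view(long_term, short_term, observed_features, promote_after=3):
    """Update one node's view. long_term is a set of stable features;
    short_term maps candidate features to how often they were re-observed."""
    for f in observed_features:
        if f in long_term:
            continue                            # already a stable feature
        short_term[f] = short_term.get(f, 0) + 1
        if short_term[f] >= promote_after:      # persistent change: promote
            long_term.add(f)
            del short_term[f]

ltm, stm = {"door", "desk"}, {}
for _ in range(3):                              # same change observed 3 times
    update_view(ltm, stm, {"new_poster"})
print(ltm)  # {'door', 'desk', 'new_poster'}
```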
Abstract:
One of the aims of Deleuze. Guattari. Schizoanalysis. Education. is to focus on the radical reconfiguration that education is undergoing, impacting educator, administrator, institution and 'sector' alike. More to the point, it is the responses to that process of reconfiguration - this newly emerging assemblage - that are a key focal point in this issue. Essential to these responses, we propose, is Deleuze and Guattari's method of schizoanalysis, which offers a way not only to understand the rules of this new game, but also, hopefully, to find some escape from the promise of a brave new world of continuous education and motivation. A brave new world of digitised courses, impersonal and corporate expertise, updatable performance metrics, Massive Open Online Courses (MOOCs), learning analytics, transformative teaching and learning, and online high-stakes testing in the name of transforming and augmenting human capital overlays the corporeal practices of institutional surveillance, examination and categorical sorting. A brave new world, importantly, where people's continuous education is instituted less, or not simply, through disciplinary practices, and increasingly through a constant and continuous sampling and profiling not simply of their performance but of their activity, measured against the profiled activity of a 'like' age group, person, or institution. This continuous education, including the sampling that accompanies it, we are all informed through various information and marketing campaigns, is in our best interest. An interest that is driven and governed by an ever-increasing corporatisation and monetisation of 'the knowledge sector', as well as an interest that is sustained through an ever-increasing, as well as continuous, debt.
Abstract:
This paper proposes new metrics and a performance-assessment framework for vision-based weed and fruit detection and classification algorithms. In order to compare algorithms, and to decide which one to use for a particular application, it is necessary to take into account that the performance obtained in a series of tests is subject to uncertainty. Such characterisation of uncertainty does not seem to be captured by the performance metrics currently reported in the literature. Therefore, we pose the problem as a general problem of scientific inference, which arises out of incomplete information, and propose as a metric of performance the (posterior) predictive probabilities that the algorithms will provide a correct outcome for target and background detection. We detail the framework, Bayesian in nature, through which these predictive probabilities can be obtained. As an illustrative example, we apply the framework to assessing the performance of four algorithms that could potentially be used in the detection of capsicums (peppers).
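As one concrete instance of such a posterior predictive probability, under the simple assumption of a Beta(1, 1) prior on the per-detection success probability (the paper's exact model may differ), Laplace's rule of succession gives the probability that the next detection is correct:

```python
# Sketch under an assumed Beta(alpha, beta) prior; Beta(1, 1) is uniform.
# After s correct outcomes in n trials, the posterior predictive
# probability of a correct outcome on the next trial is (s+a)/(n+a+b).

def posterior_predictive_correct(successes, trials, alpha=1.0, beta=1.0):
    """P(next outcome correct | data) under a Beta(alpha, beta) prior."""
    return (successes + alpha) / (trials + alpha + beta)

# Algorithm A: 87 correct target detections out of 100 test images
print(posterior_predictive_correct(87, 100))  # 0.8627...
# Algorithm B: 9 correct out of 10 -- higher raw rate, but more uncertain
print(posterior_predictive_correct(9, 10))    # 0.8333...
```

Note how the small-sample algorithm scores lower despite its higher raw accuracy, which is exactly the kind of uncertainty the abstract argues current metrics fail to capture.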
Abstract:
A novel methodology for modeling the effects of process variations on circuit delay is proposed, relating variations in process parameters to variations in the delay metric of a complex digital circuit. The delay of a 2-input NAND gate with 65 nm gate-length transistors is extensively characterized by mixed-mode simulations and is then used as a library element. The variation in the saturation current Ion at the device level, and the variation in the rising/falling-edge stage delay of the NAND gate at the circuit level, are taken as performance metrics. A 4-bit x 4-bit Wallace tree multiplier circuit is used as a representative combinational circuit to demonstrate the proposed methodology. The variation in the multiplier delay is characterized by an extensive Monte Carlo analysis to obtain delay distributions. An analytical model based on the CV/I metric is proposed to extend this methodology to a generic technology library with a variety of library elements.
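A hedged sketch of the Monte Carlo step: sample per-stage delays from a distribution fitted to the characterized NAND library element and accumulate them along a path. The normal distribution, its parameters and the path depth below are illustrative assumptions, not the paper's extracted values:

```python
import random

def sample_path_delay(n_stages=14, mean_ps=18.0, sigma_ps=1.5):
    """One Monte Carlo sample of total delay (ps) along an n-stage path,
    with each stage delay drawn independently (an assumption; correlated
    process variations would need a joint model)."""
    return sum(random.gauss(mean_ps, sigma_ps) for _ in range(n_stages))

# Build the delay distribution and summarise it
samples = [sample_path_delay() for _ in range(10_000)]
mu = sum(samples) / len(samples)
var = sum((s - mu) ** 2 for s in samples) / (len(samples) - 1)
print(f"mean = {mu:.1f} ps, std = {var ** 0.5:.2f} ps")
```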
Abstract:
In this paper, a fractional-order proportional-integral controller is developed for a miniature air vehicle for rectilinear path following and trajectory tracking. The controller is implemented by constructing a vector field surrounding the path to be followed, which is then used to generate course commands for the miniature air vehicle. The fractional-order proportional-integral controller is simulated using the fundamentals of fractional calculus, and the results for this controller are compared with those obtained for a proportional controller and a proportional-integral controller. To analyze the performance of the controllers, four performance metrics have been selected: (maximum) overshoot, control effort, settling time and the integral of time-weighted absolute error (ITAE). A comparison of the nominal as well as the robust performance of these controllers indicates that the fractional-order proportional-integral controller exhibits the best performance in terms of ITAE while performing comparably in all other respects.
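For reference, ITAE can be approximated numerically from a sampled error signal; a minimal sketch using the trapezoidal rule (the sample error trajectory is made up for illustration):

```python
def itae(times, errors):
    """Approximate ITAE = integral of t * |e(t)| dt via the trapezoid rule."""
    total = 0.0
    for i in range(1, len(times)):
        dt = times[i] - times[i - 1]
        f_prev = times[i - 1] * abs(errors[i - 1])
        f_curr = times[i] * abs(errors[i])
        total += 0.5 * (f_prev + f_curr) * dt
    return total

# A decaying cross-track error, sampled every 0.1 s
ts = [0.1 * k for k in range(100)]
es = [5.0 * (0.95 ** k) for k in range(100)]
print(f"ITAE = {itae(ts, es):.2f}")
```

The time weighting is what makes ITAE useful here: late-persisting errors are penalised more heavily than an initial transient, so it rewards controllers that settle quickly.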
Abstract:
Models of river flow time series are essential for the efficient management of a river basin. They help policy makers develop efficient water utilization strategies that maximize the utility of a scarce water resource. Time series analysis has been used extensively for modeling river flow data, and the use of machine learning techniques such as support vector regression and neural network models is gaining popularity. In this paper we compare the performance of these techniques by applying them to long-term time series of the inflows into the Krishnaraja Sagar (KRS) reservoir from three tributaries of the river Cauvery. Flow data collected over a period of 30 years at three observation points established in the upper Cauvery river sub-basin are analyzed to estimate their contribution to KRS. Specifically, the artificial neural network (ANN) model uses a multi-layer feed-forward network trained with a back-propagation algorithm, and support vector regression uses the epsilon-insensitive loss function. Auto-regressive moving average models are also applied to the same data. The performance of the different techniques is compared using metrics such as root mean squared error (RMSE), correlation, normalized root mean squared error (NRMSE) and the Nash-Sutcliffe Efficiency (NSE).
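For reference, the comparison metrics named above are standard and easy to compute; a minimal sketch (definitions of NRMSE vary, and normalization by the observed range is an assumption here):

```python
def rmse(obs, sim):
    """Root mean squared error between observed and simulated flows."""
    n = len(obs)
    return (sum((o - s) ** 2 for o, s in zip(obs, sim)) / n) ** 0.5

def nrmse(obs, sim):
    """RMSE normalized by the observed range (one common convention)."""
    return rmse(obs, sim) / (max(obs) - min(obs))

def nse(obs, sim):
    """Nash-Sutcliffe Efficiency: 1 is perfect; 0 means no better than
    simply predicting the observed mean."""
    mean_obs = sum(obs) / len(obs)
    num = sum((o - s) ** 2 for o, s in zip(obs, sim))
    den = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - num / den

observed  = [120.0, 340.0, 510.0, 280.0, 150.0]  # e.g. monthly inflows
simulated = [130.0, 310.0, 530.0, 300.0, 140.0]
print(rmse(observed, simulated))   # ~19.49
print(nrmse(observed, simulated))  # ~0.05
print(nse(observed, simulated))    # ~0.98
```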
Abstract:
Virtualization is one of the key enabling technologies for Cloud computing. Although it facilitates improved utilization of resources, virtualization can lead to performance degradation due to the sharing of physical resources such as CPU, memory, network interfaces and disk controllers. Multi-tenancy can cause highly unpredictable performance for concurrent I/O applications running inside virtual machines that share local disk storage in the Cloud. Disk I/O requests in a typical Cloud setup may have varied latency and throughput requirements, as they arise from a range of heterogeneous applications with diverse performance goals. This necessitates providing differentiated performance services to different I/O applications. In this paper, we present PriDyn, a novel scheduling framework designed to consider application I/O performance metrics such as acceptable latency and convert them into an appropriate priority value for disk access based on the current system state. The framework aims to provide differentiated I/O service to various applications and ensures predictable performance for critical applications in a multi-tenant Cloud environment. We demonstrate through experimental validation on real-world I/O traces that the framework achieves appreciable enhancements in I/O performance, indicating that this approach is a promising step towards enabling QoS guarantees for Cloud storage.
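A hedged sketch of the kind of mapping described: converting an application's acceptable latency into a disk-access priority given its currently observed latency. PriDyn's actual formula is not given in the abstract; the urgency heuristic below is an assumption for illustration:

```python
def io_priority(acceptable_latency_ms, observed_latency_ms, levels=8):
    """Map latency headroom to a priority level: the closer observed
    latency is to the acceptable bound, the more urgent (0 = most urgent)."""
    urgency = observed_latency_ms / acceptable_latency_ms  # >1 => bound violated
    urgency = min(max(urgency, 0.0), 1.0)
    return levels - 1 - int(urgency * (levels - 1))

# A latency-critical VM near its bound vs. a batch workload with headroom
print(io_priority(acceptable_latency_ms=10.0, observed_latency_ms=9.0))   # 1 (urgent)
print(io_priority(acceptable_latency_ms=500.0, observed_latency_ms=50.0)) # 7 (can wait)
```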
Abstract:
The spatial error structure of daily precipitation derived from the latest version 7 (v7) Tropical Rainfall Measuring Mission (TRMM) level 2 data products is studied through comparison with the Asian Precipitation Highly Resolved Observational Data Integration Towards Evaluation of the water resources (APHRODITE) data over a subtropical region of the Indian subcontinent, for seasonal rainfall over the 6 years from June 2002 to September 2007. The data products examined comprise v7 data from the TRMM Microwave Imager (TMI) radiometer and the precipitation radar (PR), namely 2A12, 2A25 and 2B31 (combined data from PR and TMI). The spatial distribution of uncertainty in these data products was quantified using performance metrics derived from the contingency table. For seasonal daily precipitation over a subtropical basin in India, the 2A12 data product showed greater skill in detecting rainfall and quantifying its volume than the 2A25 and 2B31 products. Error characterization using various error models revealed that random errors from multiplicative error models were homoscedastic and better represented the rainfall estimates from the 2A12 algorithm. Error decomposition performed to disentangle systematic and random errors verified that the multiplicative error model representing rainfall from the 2A12 algorithm captured a greater percentage of systematic error than for the 2A25 or 2B31 algorithms. The results verify that, although radiometer-derived 2A12 rainfall data are known to suffer from many sources of uncertainty, spatial analysis over the case study region of India shows that the 2A12 rainfall estimates are in very good agreement with the reference estimates for the data period considered.
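For reference, typical contingency-table skill scores for daily rain/no-rain detection can be computed as below; the counts and the rain threshold are illustrative, not results from the study:

```python
def contingency_scores(hits, misses, false_alarms):
    """Probability of detection (POD), false alarm ratio (FAR) and
    critical success index (CSI) from daily rain/no-rain counts."""
    pod = hits / (hits + misses)
    far = false_alarms / (hits + false_alarms)
    csi = hits / (hits + misses + false_alarms)
    return pod, far, csi

# e.g. counts of rainy days (>= 1 mm/day) detected by a satellite product
pod, far, csi = contingency_scores(hits=620, misses=140, false_alarms=95)
print(f"POD={pod:.2f}  FAR={far:.2f}  CSI={csi:.2f}")
```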