820 results for "Scarcity of available alternatives"
Abstract:
The present scarcity of operational knowledge-based systems (KBS) has been attributed, in part, to inadequate consideration of user interface design during development. From a human factors perspective, the problem has stemmed from an overall lack of user-centred design principles. Consequently, the integration of human factors principles and techniques is seen as a necessary and important precursor to ensuring the implementation of KBS which are useful to, and usable by, the end-users for whom they are intended. Focussing upon KBS work taking place within commercial and industrial environments, this research set out to assess both the extent to which human factors support was presently being utilised within development, and the future path for human factors integration. The assessment consisted of interviews conducted with a number of commercial and industrial organisations involved in KBS development, and a set of three detailed case studies of individual KBS projects. Two of the studies were carried out within a collaborative Alvey project involving the Interdisciplinary Higher Degrees Scheme (IHD) at the University of Aston in Birmingham, BIS Applied Systems Ltd (BIS), and the British Steel Corporation. This project, which had provided the initial basis and funding for the research, was concerned with the application of KBS to the design of commercial data processing (DP) systems. The third study stemmed from involvement in a KBS project being carried out by the Technology Division of the Trustee Savings Bank Group plc. The preliminary research highlighted poor human factors integration. In particular, there was a lack of early consideration of end-user requirements definition and user-centred evaluation; instead, attention was concentrated on the construction of the knowledge base and prototype evaluation with the expert(s). In response to this identified problem, a set of methods was developed, aimed at encouraging developers to consider user interface requirements early in a project. These methods were then applied in the two further projects, and their uptake within the overall development process was monitored. Experience from the two studies demonstrated that early consideration of user interface requirements was both feasible and instructive for guiding future development work. In particular, it was shown that a user interface prototype could be used as a basis for capturing requirements at the functional (task) level and at the interface dialogue level. Extrapolating from this experience, a KBS life-cycle model is proposed which incorporates user interface design (and within that, user evaluation) as a largely parallel, rather than subsequent, activity to knowledge base construction. Further to this, there is a discussion of several key elements which can be seen as inhibiting the integration of human factors within KBS development. These elements stem from characteristics of present KBS development practice, from constraints within the commercial and industrial development environments, and from the state of existing human factors support.
Abstract:
This study investigates informal conversations between native English speakers and international students living and studying in the UK. Ten non-native speaker (NNS) participants recorded themselves during conversations with native speakers (NSs). The audio-recordings were transcribed and a fine-grained, qualitative analysis was employed to examine how the participants jointly achieved both coherence and understanding in the conversations, and more specifically how the NNSs contributed to this achievement. The key areas of investigation focused on features of topic management, such as topic initiations, changes and transitions, and on the impact that any communicative difficulties had on the topical continuity of the conversations. The data suggested that these conversations flowed freely and coherently, and were marked by a relative scarcity of the communicative difficulties often associated with NS-NNS interactions. Moreover, language difficulties were found to have minimal impact on the topic development of the conversations. Unlike most previous research in the field, the data further indicated that the NNSs were able to make active contributions to the initiation and change of topics, and to employ a range of strategies to manage these effectively and coherently. The study considers the implications of the findings for teaching and learning, for second language acquisition research, and for non-native speakers everywhere.
Abstract:
Recent advances in technology have produced a significant increase in the availability of free sensor data over the Internet. With affordable weather monitoring stations now available to individual meteorology enthusiasts, a reservoir of real-time data such as temperature, rainfall and wind speed can now be obtained for most of the United States and Europe. Despite the abundance of available data, obtaining usable information about the weather in your local neighbourhood requires complex processing that poses several challenges. This paper discusses a collection of technologies and applications that harvest, refine and process this data, culminating in information that has been tailored toward the user. In this case we are particularly interested in allowing a user to make direct queries about the weather at any location, even when this is not directly instrumented, using interpolation methods. We also consider how the uncertainty that the interpolation introduces can then be communicated to the user of the system, using UncertML, a developing standard for uncertainty representation.
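To make the interpolation step concrete, the following is a minimal sketch of estimating a value at an uninstrumented location by inverse distance weighting, together with a crude spread measure that could be reported as an uncertainty. The station coordinates and readings are made up, and this is not the paper's actual method or its UncertML encoding.

```python
# Minimal sketch: inverse-distance-weighted (IDW) interpolation of temperature
# at a point with no station, plus a weighted spread as a rough uncertainty.
# Station data below are hypothetical.
import numpy as np

# (lon, lat, temperature in deg C) for nearby hypothetical stations
stations = np.array([[-1.90, 52.48, 11.2],
                     [-1.75, 52.41, 10.6],
                     [-2.05, 52.55, 12.0],
                     [-1.88, 52.30,  9.8]])

def idw(query_lon, query_lat, obs, power=2.0):
    """IDW estimate and weighted spread at a query point."""
    d = np.hypot(obs[:, 0] - query_lon, obs[:, 1] - query_lat)
    if np.any(d == 0):                      # query coincides with a station
        return obs[d == 0, 2][0], 0.0
    w = 1.0 / d ** power
    w /= w.sum()
    estimate = float(np.dot(w, obs[:, 2]))
    spread = float(np.sqrt(np.dot(w, (obs[:, 2] - estimate) ** 2)))
    return estimate, spread

value, spread = idw(-1.89, 52.45, stations)
print(f"estimated temperature: {value:.1f} +/- {spread:.1f} deg C")
```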
Abstract:
The increasing need for maintenance, repair, and overhaul (MRO) organizations to meet customers' demands for quality and reduced lead times is key to their survival within the aviation industry. Furthermore, with the unpredictability of the global market and the forecasting difficulties characteristic of the MRO industry, there is an increased need to re-evaluate the operating models of organizations within this sector. Severe economic turmoil and ever-increasing global competition create an opportunity for the adoption of a resilient, tried, and tested business operation model such as 'Lean'. In order to fully understand this concept, its long-term viability, and its application within the aerospace MRO sector, this paper presents the state of the art in the adoption of Lean within the MRO industry by carrying out a systematic review of the literature. The paper establishes the common perception of Lean in the MRO industry and the measurable progress that has been made on the subject. Some issues and challenges are also highlighted, including the misconceptions that arise from the direct transfer of the perception of Lean from other industrial sectors into the aerospace MRO industry. The 'enablers and inhibitors' of Lean within the aviation industry are also discussed. The paper exposes the scarcity of the literature and the industry's general lag in adopting the Lean paradigm, and thus highlights areas where further research is required. © 2011 Authors.
Abstract:
Considering the rapid growth of call centres (CCs) in India, its implications for businesses in the UK, and the scarcity of research on human resource management (HRM) issues in Indian CCs, this research has two main aims: first, to highlight the nature of HRM systems relevant to Indian call centres; second, to understand the significance of internal marketing (IM) in influencing frontline employees' job-related attitudes and performance. Since rewards are an important component of IM, the relationships between different types of rewards as part of an IM strategy, employee attitudes, and employee performance in Indian CCs will also be examined. Further, the research will investigate which type of commitment mediates the link between rewards and performance, and why. Data collection will take place in two phases. The first phase will involve a series of in-depth interviews with both managers and employees to understand the functioning of CCs and to develop suitable HRM systems for the Indian context. The second phase will involve data collection through questionnaires distributed to frontline employees and supervisors to examine the relationships among IM, employee attitudes, and performance. Such an investigation is expected to contribute to the development of better theory and practice.
Abstract:
Huge advertising budgets are invested by firms to reach and convince potential consumers to buy their products. To optimize these investments, it is fundamental not only to ensure that appropriate consumers will be reached, but also that they will be in appropriate reception conditions. Marketing research has focused on the way consumers react to advertising, as well as on some individual and contextual factors that could mediate or moderate the ad impact on consumers (e.g. motivation and ability to process information, or attitudes toward advertising). Nevertheless, a factor that potentially influences consumers' reactions to advertising has not yet been studied in marketing research: fatigue. Fatigue can nonetheless affect key variables of advertising processing, such as the availability of cognitive resources (Lieury 2004). Fatigue is felt when the body signals that an activity (or inactivity) should be stopped for rest, allowing the individual to compensate for its effects. Dittner et al. (2004) define it as "the state of weariness following a period of exertion, mental or physical, characterized by a decreased capacity for work and reduced efficiency to respond to stimuli." It signals that resources will run short if the ongoing activity continues. According to Schmidtke (1969), fatigue impairs information reception, perception, coordination, attention, concentration, and thinking. In addition, for Markle (1984), fatigue reduces memory and communication ability while increasing reaction time and the number of errors. Thus, fatigue may have large effects on advertising processing. We suggest that fatigue determines the level of available resources. Some research on consumer responses to advertising claims that complexity is a fundamental element to take into consideration. Complexity determines the cognitive effort the consumer must expend to understand the message (Putrevu et al. 2004). Thus, we suggest that complexity determines the level of required resources. To study this question of the supply of and demand for cognitive resources, we draw upon Resource Matching Theory. Anand and Sternthal (1989, 1990) were the first to state the resource matching principle: an ad is most persuasive when the resources required to process it match the resources the viewer is willing and able to provide. They show that when the required resources exceed those available, the message is not fully processed by the consumer, and that when available resources greatly exceed those required, the viewer elaborates critical or unrelated thoughts. According to Resource Matching Theory, the level of resources demanded by an ad can be high or low and is mostly determined by the ad's layout (Peracchio and Myers-Levy, 1997). We manipulate the level of required resources using three levels of ad complexity (low, high, extremely high). On the other side, the resources available to an ad viewer are determined by many contextual and individual variables. We manipulate the level of available resources using two levels of fatigue (low, high). Tired viewers want to limit processing effort to minimal resource requirements by relying on heuristics, forming an overall impression at first glance. It will be easier for them to decode the message when ads are very simple. On the contrary, the most effective ads for viewers who are not tired are complex enough to draw their attention and fully use their resources.
They will use more analytical strategies, looking at the details of the ad. However, if ads are too complex, they will be too difficult to understand; the viewer will be discouraged from processing the information and will overlook the ad. The objective of our research is to study fatigue as a moderating variable of advertising information processing. We run two experimental studies to assess the effect of fatigue on visual strategies, comprehension, persuasion, and memorization. In study 1, thirty-five undergraduate students enrolled in a marketing research course participated in the experiment. The experimental design is 2 (tiredness level: between subjects) × 3 (ad complexity level: within subjects). Participants were randomly assigned a schedule time (morning: 8-10 am or evening: 10-12 pm) to perform the experiment. We chose to test subjects at various moments of the day to obtain maximum variance in their fatigue level. We use participants' morningness/eveningness tendency (Horne & Ostberg, 1976) as a control variable. We assess fatigue level using subjective measures (a questionnaire with fatigue scales) and objective measures (reaction time and number of errors). Regarding complexity levels, we designed our own ads in order to keep aspects other than complexity equal. We ran a pretest using the Resource Demands scale (Keller and Bloch 1997) and by rating the ads on complexity following Morrison and Dainoff (1972) to check our complexity manipulation; three significantly different levels were found. After completing the fatigue scales, participants view the ads on a screen while their eye movements are recorded by an eye-tracker. Eye-tracking allows us to identify patterns of visual attention (Pieters and Warlop 1999), from which we can infer respondents' visual strategies according to their level of fatigue. Comprehension is assessed with a comprehension test. We collect measures of attitude change for persuasion, and measures of recall and recognition at various points in time for memorization. Once the effect of fatigue has been determined across the student population, it is of interest to account for individual differences in fatigue severity and perception. Therefore, we run study 2, which is similar to the first except for the design: time of day is now within-subjects and complexity becomes between-subjects.
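For concreteness, here is a minimal sketch of how data from such a 2 × 3 mixed design might be analysed, with the moderation hypothesis corresponding to the fatigue × complexity interaction. The column names, the synthetic scores, and the use of pingouin's mixed ANOVA are assumptions for illustration, not the studies' actual analysis.

```python
# Minimal sketch: analysing a 2 (fatigue: between) x 3 (complexity: within)
# mixed design. All data and column names are hypothetical placeholders.
import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(0)
rows = []
for p in range(1, 36):                        # 35 participants, as in study 1
    fatigue = "high" if p % 2 else "low"      # between-subjects factor
    for complexity in ("low", "high", "extreme"):   # within-subjects factor
        rows.append({
            "participant": p,
            "fatigue": fatigue,
            "complexity": complexity,
            "comprehension": rng.normal(10, 2),      # placeholder score
        })
df = pd.DataFrame(rows)

# Mixed ANOVA: main effects of fatigue and complexity, plus their interaction,
# which is the moderation effect the studies are designed to test.
aov = pg.mixed_anova(data=df, dv="comprehension", within="complexity",
                     between="fatigue", subject="participant")
print(aov.round(3))
```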
Abstract:
Online communities are prime sources of information. The Web is rich with forums and Question Answering (Q&A) communities where people go to seek answers to all kinds of questions. Most systems employ manual answer-rating procedures to encourage people to provide quality answers and to help users locate the best answers in a given thread. However, in the datasets we collected from three online communities, we found that half of their threads lacked best-answer markings. This stresses the need for methods to assess the quality of available answers in order to: (1) provide automated ratings to fill in for, or support, manually assigned ones; and (2) assist users when browsing such answers by filtering in potential best answers. In this paper, we collected data from three online communities and converted it to RDF based on the SIOC ontology. We then explored an approach for predicting best answers using a combination of content, user, and thread features. We show how the influence of such features on predicting best answers differs across communities. Further, we demonstrate how certain features unique to some of our community systems can boost the predictability of best answers.
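As an illustration of the kind of feature-based prediction described above, here is a minimal sketch of a best-answer classifier. The feature names, the file layout, and the choice of a random forest are assumptions for illustration, not the paper's actual features or pipeline.

```python
# Minimal sketch: predicting "best answer" from content, user and thread
# features. The CSV layout and feature columns below are hypothetical.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report

df = pd.read_csv("answers_features.csv")      # assumed per-answer feature table
X = df[["answer_length", "num_links",          # content features
        "user_reputation", "user_prior_best_answers",   # user features
        "position_in_thread", "thread_answer_count"]]   # thread features
y = df["is_best_answer"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)

clf = RandomForestClassifier(n_estimators=200, random_state=42)
clf.fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))

# Feature importances hint at which feature family (content, user or thread)
# carries the most signal in a given community.
print(sorted(zip(clf.feature_importances_, X.columns), reverse=True))
```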
Abstract:
Background: Parkinson's disease (PD) is an incurable neurological disease with approximately 0.3% prevalence. The hallmark symptom is gradual movement deterioration. Current scientific consensus about disease progression holds that symptoms will worsen smoothly over time unless treated. Accurate information about symptom dynamics is of critical importance to patients, caregivers, and the scientific community for the design of new treatments, clinical decision making, and individual disease management. Long-term studies characterize the typical time course of the disease as an early linear progression gradually reaching a plateau in later stages. However, symptom dynamics over durations of days to weeks remain unquantified. Currently, there is a scarcity of objective clinical information about symptom dynamics at intervals shorter than 3 months stretching over several years, but Internet-based patient self-report platforms may change this. Objective: To assess the clinical value of online self-reported PD symptom data recorded by users of the health-focused Internet social research platform PatientsLikeMe (PLM), in which patients quantify their symptoms on a regular basis on a subset of the Unified Parkinson's Disease Rating Scale (UPDRS). By analyzing these data, we aim for a scientific window on the nature of symptom dynamics for assessment intervals shorter than 3 months over durations of several years. Methods: Online self-reported data were validated against the gold-standard Parkinson's Disease Data and Organizing Center (PD-DOC) database, containing clinical symptom data at intervals greater than 3 months. The data were compared visually using quantile-quantile plots, and numerically using the Kolmogorov-Smirnov test. Using a simple piecewise linear trend estimation algorithm, the PLM data were smoothed to separate random fluctuations from continuous symptom dynamics. Subtracting the trends from the original data revealed random fluctuations in symptom severity. The average magnitude of fluctuations versus time since diagnosis was modeled using a gamma generalized linear model. Results: Distributions of age at diagnosis and UPDRS in the PLM and PD-DOC databases were broadly consistent. The PLM patients were systematically younger than the PD-DOC patients and showed increased symptom severity in the PD off state. The average fluctuation in symptoms (UPDRS Parts I and II) was 2.6 points at the time of diagnosis, rising to 5.9 points 16 years after diagnosis. These fluctuations exceed the estimated minimal and moderate clinically important differences, respectively. Not all patients conformed to the current clinical picture of gradual, smooth changes: many patients had regimes where symptom severity varied in an unpredictable manner, or underwent large rapid changes in an otherwise more stable progression. Conclusions: This information about short-term PD symptom dynamics contributes new scientific understanding of disease progression that is currently very costly to obtain without self-administered Internet-based reporting. This understanding should have implications for the optimization of clinical trials into new treatments and for the choice of treatment decision timescales.
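A minimal sketch, on synthetic data, of the two analysis steps described in the Methods: a simple piecewise-linear trend fit that separates slow progression from short-term fluctuation, and a gamma GLM relating fluctuation magnitude to time since diagnosis. The segment spacing, the synthetic series, and the default Gamma link are illustrative assumptions, not the paper's exact algorithm.

```python
# Minimal sketch: (1) piecewise-linear trend estimation, (2) gamma GLM of
# fluctuation magnitude vs. years since diagnosis, on synthetic data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
years = np.sort(rng.uniform(0, 16, 300))             # time since diagnosis
true_trend = 20 + 2.0 * years                         # slow linear worsening
updrs = true_trend + rng.normal(0, 1 + 0.3 * years)   # noisier later in disease

def piecewise_linear_trend(t, y, n_segments=6):
    """Fit an independent least-squares line within each equal-width segment."""
    trend = np.full_like(y, y.mean())
    edges = np.linspace(t.min(), t.max(), n_segments + 1)
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (t >= lo) & (t <= hi)
        if mask.sum() >= 2:
            slope, intercept = np.polyfit(t[mask], y[mask], 1)
            trend[mask] = slope * t[mask] + intercept
    return trend

trend = piecewise_linear_trend(years, updrs)
fluctuation = np.abs(updrs - trend)                   # residual symptom swings

# Gamma GLM: does the typical fluctuation grow with time since diagnosis?
X = sm.add_constant(years)
model = sm.GLM(fluctuation, X, family=sm.families.Gamma()).fit()
print(model.summary())
```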
Abstract:
Since the initial launch of silicone hydrogel lenses, there has been a considerable broadening in the range of available commercial material properties. The very mobile silicon–oxygen bonds convey distinctive surface and mechanical properties on silicone hydrogels, in which the advantages of enhanced oxygen permeability, reduced protein deposition, and modest frictional interaction are balanced by increased lipid and elastic response. There are now some 15 silicone hydrogel material variants available to practitioners; arguably, the changes that have taken place have been strongly influenced by feedback based on clinical experience. Water content is one of the most influential properties, and the decade has seen a progressive rise from lotrafilcon-A (24%) to efrofilcon-A (74%). Moduli have decreased over the same period from 1.4 to 0.3 MPa, but not solely as a result of changes in water content. Surface properties do not correlate directly with water content, and ingenious approaches have been used to achieve desirable improvements (e.g., greater lubricity and lower contact angle hysteresis). This is demonstrated by comparing the hysteresis values of the earliest (lotrafilcon-A, >40°) and most recent (delefilcon-A, <10°) coated silicone hydrogels. Although wettability is important, it is not of itself a good predictor of ocular response, because this involves a much wider range of physicochemical and biochemical factors. The interference of the lens with ocular dynamics is complex, leading separately to tissue–material interactions involving the anterior and posterior lens surfaces. The biochemical consequences of these interactions may hold the key to a greater understanding of ocular incompatibility and end-of-day discomfort.
Abstract:
Type 2 diabetes is a complex, progressive endocrine and metabolic disease that typically requires substantial lifestyle changes and multiple medications to lower blood glucose, reduce cardiovascular risk and address comorbidities. Despite an extensive range of available and effective treatments, <50% of patients achieve a glycaemic target of HbA1c <7.0%, and about two-thirds die of premature cardiovascular disease. Adherence to prescribed therapies is an important factor in the management of type 2 diabetes that is often overlooked. Inadequate adherence to oral antidiabetes agents, defined as collecting <80% of prescribed medication, is variously estimated to apply to between 36% and 93% of patients. All studies affirm that a significant proportion of type 2 diabetes patients exhibit poor adherence that will contribute to less than desired control. Identified factors that impede adherence include complex dosing regimens, clinical inertia, safety concerns, socioeconomic issues, ethnicity, patient education and beliefs, social support and polypharmacy. This review explores these factors and potential strategies to improve adherence in patients with type 2 diabetes. © 2011 Blackwell Publishing Ltd.
Abstract:
Zambia and many other countries in Sub-Saharan Africa face a key challenge of sustaining high levels of coverage of AIDS treatment under prospects of dwindling global resources for HIV/AIDS treatment. Policy debate on HIV/AIDS is increasingly focused on efficiency in the use of available resources. In this chapter, we apply Data Envelopment Analysis (DEA) to estimate the short-term technical efficiency of 34 HIV/AIDS treatment facilities in Zambia. The data consist of input variables such as human resources, medical equipment, building space, drugs, medical supplies, and other materials used in providing HIV/AIDS treatment. Two main outputs, namely the number of ART-years (antiretroviral therapy years) and pre-ART-years, are included in the model. Results show the mean technical efficiency score to be 83%, with great variability in efficiency scores across facilities. Scale inefficiency is also shown to be significant. About half of the facilities were on the efficiency frontier. We also construct bootstrap confidence intervals around the efficiency scores.
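For readers unfamiliar with DEA, the following is a minimal sketch of an input-oriented, constant-returns (CCR) efficiency score computed with a linear program. The facility inputs and outputs are made up, and the chapter's returns-to-scale and bootstrap analyses are not reproduced here.

```python
# Minimal sketch: input-oriented CCR DEA via linear programming.
# Facility data below are hypothetical placeholders.
import numpy as np
from scipy.optimize import linprog

# Rows = facilities (DMUs); columns = inputs (e.g. staff, building space)
X = np.array([[20, 300], [35, 450], [15, 200], [40, 500]], dtype=float)
# Columns = outputs (e.g. ART-years, pre-ART-years delivered)
Y = np.array([[180, 60], [260, 90], [150, 40], [240, 70]], dtype=float)
n, m = X.shape          # number of facilities, number of inputs
s = Y.shape[1]          # number of outputs

def efficiency(o):
    """min theta s.t. some combination of facilities uses <= theta * inputs
    of facility o while producing >= its outputs (lambda >= 0)."""
    c = np.r_[1.0, np.zeros(n)]                    # minimise theta
    A_in = np.hstack([-X[[o]].T, X.T])             # sum_j lam_j x_ij <= theta x_io
    A_out = np.hstack([np.zeros((s, 1)), -Y.T])    # sum_j lam_j y_rj >= y_ro
    A = np.vstack([A_in, A_out])
    b = np.r_[np.zeros(m), -Y[o]]
    res = linprog(c, A_ub=A, b_ub=b,
                  bounds=[(None, None)] + [(0, None)] * n, method="highs")
    return res.x[0]

for o in range(n):
    print(f"facility {o}: technical efficiency = {efficiency(o):.3f}")
```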
Abstract:
Peptides are of great therapeutic potential as vaccines and drugs. Knowledge of physicochemical descriptors, including the partition coefficient logP, is useful for the development of predictive Quantitative Structure-Activity Relationships (QSARs). We have investigated the accuracy of available programs for the prediction of logP values for peptides with known experimental values obtained from the literature. Eight prediction programs were tested, of which seven were fragment-based methods: XLogP, LogKow, PLogP, ACDLogP, ALogP, Interactive Analysis's LogP (IALogP) and MLogP; one program, QikProp, used a whole-molecule approach. The predictive accuracy of the programs was assessed using r² values, with ALogP being the most effective (r² = 0.822) and MLogP the least (r² = 0.090). We also examined three distinct types of peptide structure: blocked, unblocked, and cyclic. For each study (all peptides, blocked, unblocked and cyclic peptides) the performance of the programs, rated from best to worst, is as follows: all peptides - ALogP, QikProp, PLogP, XLogP, IALogP, LogKow, ACDLogP, and MLogP; blocked peptides - PLogP, XLogP, ACDLogP, IALogP, LogKow, QikProp, ALogP, and MLogP; unblocked peptides - QikProp, IALogP, ALogP, ACDLogP, MLogP, XLogP, LogKow and PLogP; cyclic peptides - LogKow, ALogP, XLogP, MLogP, QikProp, ACDLogP, IALogP. In summary, all programs gave better predictions for blocked peptides, while, in general, logP values for cyclic peptides were under-predicted and those of unblocked peptides were over-predicted.
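For concreteness, a minimal sketch of the accuracy measure used in the comparison: the squared Pearson correlation (r²) between experimental and predicted logP across a peptide set. The numbers below are placeholders, not values from the study.

```python
# Minimal sketch: computing r^2 between experimental and predicted logP
# for one prediction program. Values are hypothetical placeholders.
import numpy as np
from scipy.stats import pearsonr

experimental = np.array([-2.1, -0.5, 0.3, 1.2, 2.4])   # measured logP
predicted    = np.array([-1.8, -0.9, 0.1, 1.5, 2.0])   # one program's output

r, _ = pearsonr(experimental, predicted)
print(f"r^2 = {r**2:.3f}")   # higher is better; compare across programs
```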
Abstract:
In multicriteria decision problems many values must be assigned, such as the importance of the different criteria and the values of the alternatives with respect to subjective criteria. Since these assignments are approximate, it is very important to analyze the sensitivity of results when small modifications of the assignments are made. When solving a multicriteria decision problem, it is desirable to choose a decision function that leads to a solution as stable as possible. We propose here a method based on genetic programming that produces better decision functions than the commonly used ones. The theoretical expectations are validated by case studies. © 2003 Elsevier B.V. All rights reserved.
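A minimal sketch of the stability question raised above: perturb the approximate weight assignments slightly and measure how often the preferred alternative changes under a given decision function. The weighted-sum function and the data are illustrative assumptions; the paper evolves decision functions with genetic programming rather than fixing one in advance.

```python
# Minimal sketch: sensitivity of a multicriteria ranking to small weight
# perturbations. Data, weights and the decision function are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
values = np.array([[0.7, 0.4, 0.9],      # alternatives x criteria scores
                   [0.6, 0.8, 0.5],
                   [0.9, 0.3, 0.6]])
weights = np.array([0.5, 0.3, 0.2])      # assigned criterion importances

def winner(decision_fn, w):
    """Index of the alternative preferred by the decision function."""
    return int(np.argmax([decision_fn(row, w) for row in values]))

weighted_sum = lambda row, w: float(np.dot(row, w))

base = winner(weighted_sum, weights)
trials, flips = 1000, 0
for _ in range(trials):
    noise = rng.normal(0, 0.05, size=weights.shape)   # small modifications
    w = np.clip(weights + noise, 0, None)
    w /= w.sum()
    flips += winner(weighted_sum, w) != base
print(f"preferred alternative changed in {flips / trials:.1%} of perturbations")
```

A more stable decision function is one for which this change rate stays low under the same perturbations, which is the property the genetic-programming search in the paper optimises for.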
Abstract:
Different procurement decisions taken by relief organizations can have considerably different implications for the transport, storage, and distribution of humanitarian aid, and can ultimately influence the performance of the humanitarian supply chain and the delivery of the aid. In this article, we look into what resources are needed and how these resources evolve in the delivery of humanitarian aid. Drawing on the resource-based view of the firm, we develop a framework to categorize the impact of local resources on the configuration of humanitarian supply chains. In contrast to other papers, the importance of localizing the configuration of the humanitarian supply chain is not only conceptually recognized; empirical investigations are also provided. In terms of methodology, this article is based on the analysis of secondary data from two housing reconstruction projects. Findings indicate that the use of local resources in humanitarian aid has positive effects on programs' overall supply chain performance, and that these effects are not only related to the macroeconomic perspective but extend to improvements related to the use of knowledge. At the same time, it was found that local sourcing often comes with a number of problems; in one of the cases, for example, there were significant problems related to the scarcity of local supplies. Both housing reconstruction projects indicated a continuous need for changes throughout the programs, as a dynamic supply chain configuration is important for the long-term sustainability of reconstruction aid. © 2014 Decision Sciences Institute.