41 results for Multiple-scale processing
Abstract:
Firms invest huge advertising budgets to reach potential consumers and convince them to buy their products. To optimize these investments, it is fundamental to ensure not only that the appropriate consumers are reached, but also that they are in appropriate reception conditions. Marketing research has focused on the way consumers react to advertising, as well as on individual and contextual factors that could mediate or moderate an ad's impact on consumers (e.g. motivation and ability to process information, or attitudes toward advertising). Nevertheless, one factor that potentially influences consumers' reactions to advertising has not yet been studied in marketing research: fatigue. Yet fatigue can affect key variables of advertising processing, such as the availability of cognitive resources (Lieury 2004). Fatigue is felt when the body signals that an activity (or inactivity) should be stopped in favour of rest, allowing the individual to compensate for the effects of fatigue. Dittner et al. (2004) define it as "the state of weariness following a period of exertion, mental or physical, characterized by a decreased capacity for work and reduced efficiency to respond to stimuli." It signals that resources will run short if the ongoing activity continues. According to Schmidtke (1969), fatigue impairs information reception, perception, coordination, attention, concentration and thinking. In addition, for Markle (1984) fatigue decreases memory and communication ability, while increasing reaction time and the number of errors. Fatigue may therefore have large effects on advertising processing. We suggest that fatigue determines the level of available resources. Research on consumer responses to advertising claims that complexity is a fundamental element to take into consideration: complexity determines the cognitive effort the consumer must supply to understand the message (Putrevu et al. 2004). Thus, we suggest that complexity determines the level of required resources. To study this question of the supply of and demand for cognitive resources, we draw upon Resource Matching Theory. Anand and Sternthal (1989, 1990) were the first to state the resource matching principle, according to which an ad is most persuasive when the resources required to process it match the resources the viewer is willing and able to provide. They show that when the required resources exceed those available, the consumer does not process the message fully, and when the available resources far exceed those required, the viewer elaborates critical or unrelated thoughts. According to Resource Matching Theory, the level of resources an ad demands can be high or low and is mostly determined by the ad's layout (Peracchio and Meyers-Levy, 1997). We manipulate the level of required resources using three levels of ad complexity (low, high, extremely high). Conversely, the resources an ad viewer has available are determined by many contextual and individual variables; we manipulate the level of available resources using two levels of fatigue (low, high). Tired viewers want to limit their processing effort to minimal resource requirements, relying on heuristics and forming an overall impression at first glance; very simple ads will be easier for them to decode. By contrast, the most effective ads for viewers who are not tired are complex enough to draw their attention and fully use their resources.
Such viewers will use more analytical strategies, attending to the details of the ad. However, if ads are too complex they become too difficult to understand, and the viewer, discouraged from processing the information, will overlook the ad. The objective of our research is to study fatigue as a moderating variable of advertising information processing. We run two experimental studies to assess the effect of fatigue on visual strategies, comprehension, persuasion and memorization. In study 1, thirty-five undergraduate students enrolled in a marketing research course participated in the experiment. The experimental design is 2 (tiredness level: between subjects) x 3 (ad complexity level: within subjects). Participants were randomly assigned a time slot (morning: 8-10 am or evening: 10-12 pm) for the experiment; we tested subjects at different moments of the day to obtain maximum variance in their fatigue levels, using participants' morningness/eveningness tendency (Horne & Ostberg, 1976) as a control variable. We assess fatigue level with both subjective measures (a questionnaire with fatigue scales) and objective measures (reaction time and number of errors). Regarding complexity levels, we designed our own ads in order to hold constant all aspects other than complexity. To check the complexity manipulation we ran a pretest using the Resource Demands scale (Keller and Bloch 1997) and complexity ratings in the manner of Morrison and Dainoff (1972), and found three significantly different levels. After completing the fatigue scales, participants view the ads on a screen while their eye movements are recorded by an eye tracker. Eye tracking allows us to identify patterns of visual attention (Pieters and Warlop 1999), from which we infer respondents' visual strategies according to their level of fatigue. Comprehension is assessed with a comprehension test; persuasion with measures of attitude change; and memorization with measures of recall and recognition at various points in time. Once the effect of fatigue has been established across the student population, it becomes interesting to account for individual differences in fatigue severity and perception. We therefore run study 2, which is similar to study 1 except for the design: time of day is now within subjects and complexity becomes between subjects.
Abstract:
Multi-agent systems are complex systems composed of multiple intelligent agents that act either independently or in cooperation with one another. Agent-based modelling is a method for studying complex systems such as economies, societies and ecologies. Because of their complexity, mathematical analysis is often limited in its ability to analyse such systems; in such cases, agent-based modelling offers a practical, constructive method of analysis. The objective of this book is to shed light on some emergent properties of multi-agent systems. The authors focus their investigation on the effect of knowledge exchange on the convergence of complex multi-agent systems.
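As a loose illustration of the kind of dynamics investigated here, the sketch below simulates agents that repeatedly exchange a scalar "knowledge" value with random partners, so the population converges as exchange proceeds. All parameters are hypothetical and the model is far simpler than those the book studies.

```python
import random

# Minimal agent-based sketch: each agent holds a scalar "knowledge" value
# and repeatedly averages it with a randomly chosen partner. The spread of
# values shrinks over time, illustrating convergence through exchange.
# All parameters (population size, rounds, exchange rate) are hypothetical.

N_AGENTS = 50
N_ROUNDS = 200
EXCHANGE_RATE = 0.5  # how far each agent moves toward the shared midpoint

agents = [random.uniform(0.0, 1.0) for _ in range(N_AGENTS)]

for _ in range(N_ROUNDS):
    i, j = random.sample(range(N_AGENTS), 2)
    mid = (agents[i] + agents[j]) / 2.0
    agents[i] += EXCHANGE_RATE * (mid - agents[i])
    agents[j] += EXCHANGE_RATE * (mid - agents[j])

spread = max(agents) - min(agents)
print(f"spread after {N_ROUNDS} exchanges: {spread:.4f}")
```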
Abstract:
Forests play a pivotal role in timber production, in the maintenance and development of biodiversity, and in carbon sequestration and storage in the context of the Kyoto Protocol. Policy makers and forest experts therefore require reliable information on forest extent, type and change for management, planning and modelling purposes. It is becoming increasingly clear that such forest information is frequently inconsistent and unharmonised between countries and continents. This research paper presents a forest information portal that has been developed in line with the GEOSS and INSPIRE frameworks. The web portal provides access to forest resources data at a variety of spatial scales, from global through regional to local, as well as providing analytical capabilities for monitoring and validating forest change. The system also allows forest data and processing services to be utilised within other thematic areas. The web portal has been developed using open standards to facilitate accessibility, interoperability and data transfer.
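The abstract names the frameworks (GEOSS, INSPIRE) but not the portal's service interfaces. Assuming OGC-style web services, which "open standards" suggests but does not confirm, a client might retrieve the portal's layer catalogue as in the sketch below; the endpoint URL is a placeholder, not the real portal address, so the request only succeeds against a live service.

```python
from urllib.request import urlopen
from urllib.parse import urlencode

# Hypothetical endpoint: the abstract does not give the portal's service
# URL, so this address is a placeholder for illustration only.
BASE_URL = "https://example.org/forest-portal/wms"

params = urlencode({
    "service": "WMS",            # OGC Web Map Service
    "request": "GetCapabilities",
    "version": "1.3.0",
})

# Fetch the capabilities document listing the available forest-cover layers.
with urlopen(f"{BASE_URL}?{params}") as resp:
    capabilities_xml = resp.read().decode("utf-8")

print(capabilities_xml[:500])  # first part of the XML capabilities document
```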
Abstract:
Remote sensing data are routinely used in ecology to investigate the relationship between landscape pattern, as characterised by land use and land cover maps, and ecological processes. Multiple factors related to the representation of geographic phenomena have been shown to affect the characterisation of landscape pattern, resulting in spatial uncertainty. This study investigated statistically the effect of the interaction between landscape spatial pattern and geospatial processing methods, unlike most papers, which consider the effect of each factor only in isolation. This matters because the data used to calculate landscape metrics typically undergo a series of data abstraction tasks that are rarely performed in isolation. The geospatial processing methods tested were the aggregation method and the choice of pixel size used to aggregate data. These were compared against two components of landscape pattern: spatial heterogeneity and the proportion of land-cover class area. The interactions and their effect on the final land-cover map were described using landscape metrics to measure landscape pattern and classification accuracy (the response variables). All landscape metrics and classification accuracy were affected both by landscape pattern and by processing methods. Large variability in the response variables and interactions between the explanatory variables were observed. However, even where interactions occurred, they affected only the magnitude of the difference in landscape metric values. Thus, provided the same processing methods are used, landscapes should retain their ranking when their landscape metrics are compared: for example, highly fragmented landscapes will always have larger values for the landscape metric "number of patches" than less fragmented landscapes. But the magnitude of the difference between landscapes may change, and absolute values of landscape metrics may therefore need to be interpreted with caution. The explanatory variables with the largest effects were spatial heterogeneity and pixel size; these tended to produce large main effects and large interactions. The high variability in the response variables and the interactions among the explanatory variables indicate that it would be difficult to generalise about the impact of processing on landscape pattern: only two processing methods were tested, and untested processing methods may well produce even greater spatial uncertainty. © 2013 Elsevier B.V.
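As a small illustration of the aggregation effects studied here, the following sketch (hypothetical data, scipy assumed available) computes the "number of patches" metric on a fine-resolution binary land-cover map and again after majority-rule aggregation to coarser pixel sizes; the counts typically differ, in line with the paper's point about pixel size.

```python
import numpy as np
from scipy import ndimage

# Hypothetical binary land-cover map: 1 = focal class, 0 = background.
rng = np.random.default_rng(0)
landcover = (rng.random((60, 60)) < 0.4).astype(int)

def number_of_patches(grid):
    """Count 4-connected patches of the focal class (value 1)."""
    _, n = ndimage.label(grid)
    return n

def majority_aggregate(grid, factor):
    """Aggregate to coarser pixels using a majority rule."""
    h, w = grid.shape
    blocks = grid[:h - h % factor, :w - w % factor]
    blocks = blocks.reshape(h // factor, factor, w // factor, factor)
    # A coarse pixel becomes class 1 if over half of its fine pixels are 1.
    return (blocks.mean(axis=(1, 3)) > 0.5).astype(int)

print("fine pixels:  ", number_of_patches(landcover))
print("3x aggregated:", number_of_patches(majority_aggregate(landcover, 3)))
print("5x aggregated:", number_of_patches(majority_aggregate(landcover, 5)))
```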
Abstract:
In a Data Envelopment Analysis (DEA) model, some of the weights used to compute the efficiency of a unit can take zero or negligible values despite the importance of the corresponding input or output. This paper offers an approach to preventing inputs and outputs from being ignored in the DEA assessment under the multiple-input, multiple-output VRS environment, building on an approach introduced in Allen and Thanassoulis (2004) for single-input, multiple-output CRS cases. The proposed method is based on introducing unobserved DMUs, created by adjusting the input and output levels of certain observed, relatively efficient DMUs in a manner that reflects a combination of technical information and the decision maker's value judgements. In contrast to many alternative techniques used to constrain weights and/or improve envelopment in DEA, this approach allows one to impose local information on production trade-offs that is in line with the general VRS technology. The suggested procedure is illustrated using real data. © 2011 Elsevier B.V. All rights reserved.
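For readers unfamiliar with the setting, here is a minimal sketch of the standard input-oriented VRS (BCC) envelopment model the paper works within, on invented data and assuming scipy is available; it does not reproduce the paper's unobserved-DMU construction.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical data: 4 DMUs, 2 inputs, 1 output.
X = np.array([[2.0, 4.0], [3.0, 2.0], [5.0, 1.0], [4.0, 5.0]])  # inputs
Y = np.array([[3.0], [4.0], [4.0], [5.0]])                      # outputs

def vrs_efficiency(k, X, Y):
    """Input-oriented VRS efficiency of DMU k: minimise theta subject to
    the envelopment constraints with a convexity condition on lambda."""
    n, m = X.shape
    s = Y.shape[1]
    # Decision variables: [theta, lambda_1, ..., lambda_n]
    c = np.zeros(n + 1)
    c[0] = 1.0  # minimise theta
    # Inputs:  sum_j lambda_j * x_ij - theta * x_ik <= 0
    A_in = np.hstack([-X[k].reshape(m, 1), X.T])
    # Outputs: -sum_j lambda_j * y_rj <= -y_rk
    A_out = np.hstack([np.zeros((s, 1)), -Y.T])
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.concatenate([np.zeros(m), -Y[k]])
    # VRS convexity: sum_j lambda_j = 1
    A_eq = np.hstack([[0.0], np.ones(n)]).reshape(1, -1)
    bounds = [(None, None)] + [(0, None)] * n
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=bounds)
    return res.x[0]

for k in range(len(X)):
    print(f"DMU {k + 1}: efficiency = {vrs_efficiency(k, X, Y):.3f}")
```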
Abstract:
A series of novel block copolymers, processable from single organic solvents and subsequently rendered amphiphilic by thermolysis, have been synthesized using Grignard metathesis (GRIM) and reversible addition-fragmentation chain transfer (RAFT) polymerizations and azide-alkyne click chemistry. This chemistry is simple and allows the fabrication of well-defined block copolymers with controllable block lengths. The block copolymers, designed for use as interfacial adhesive layers in organic photovoltaics to enhance contact between the photoactive and hole transport layers, comprise printable poly(3-hexylthiophene)-block-poly(neopentyl p-styrenesulfonate), P3HT-b-PNSS. They are subsequently converted to P3HT-b-poly(p-styrenesulfonate), P3HT-b-PSS, following deposition and thermal treatment at 150 °C. Grazing incidence small- and wide-angle X-ray scattering (GISAXS/GIWAXS) revealed that thin films of the amphiphilic block copolymers comprise lamellar nanodomains of P3HT crystallites that can be pushed further apart by increasing the PSS block lengths. The approach of using a thermally modifiable block allows deposition of this copolymer from a single organic solvent and subsequent conversion to an amphiphilic layer by nonchemical means, which is particularly attractive for large-scale roll-to-roll industrial printing processes.
Abstract:
Three studies tested the impact of properties of behavioral intention on intention-behavior consistency, information processing, and resistance. Principal components analysis showed that properties of intention formed distinct factors. Study 1 demonstrated that temporal stability, but not the other intention attributes, moderated intention-behavior consistency. Study 2 found that greater stability of intention was associated with improved memory performance. In Study 3, participants were confronted with a rating scale manipulation designed to alter their intention scores. Findings showed that stable intentions were able to withstand attack. Overall, the present research findings suggest that different properties of intention are not simply manifestations of a single underlying construct ("intention strength"), and that temporal stability exhibits superior resistance and impact compared to other intention attributes. © 2013 Wiley Periodicals, Inc.
Abstract:
The Multiple Pheromone Ant Clustering Algorithm (MPACA) models the collective behaviour of ants to find clusters in data and to assign objects to the most appropriate class. It is an ant colony optimisation approach that uses pheromones to mark paths linking objects that are similar and potentially members of the same cluster or class. Its novelty is in the way it uses separate pheromones for each descriptive attribute of the object rather than a single pheromone representing the whole object. Ants that encounter other ants frequently enough can combine the attribute values they are detecting, which enables the MPACA to learn influential variable interactions. This paper applies the model to real-world data from two domains. One is logistics, focusing on resource allocation rather than the more traditional vehicle-routing problem. The other is mental-health risk assessment. The task for the MPACA in each domain was to predict class membership where the classes for the logistics domain were the levels of demand on haulage company resources and the mental-health classes were levels of suicide risk. Results on these noisy real-world data were promising, demonstrating the ability of the MPACA to find patterns in the data with accuracy comparable to more traditional linear regression models. © 2013 Polish Information Processing Society.
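The sketch below illustrates only the core novelty described here, a separate pheromone per descriptive attribute, on invented data; it is a conceptual toy, not the published MPACA.

```python
import random

# Loose sketch of the per-attribute pheromone idea: each attribute gets its
# own pheromone matrix over object pairs, and edges between objects with
# similar attribute values are reinforced. Data and parameters are invented.

data = [(1.0, 5.0), (1.2, 4.8), (6.0, 1.0), (6.3, 0.7)]  # hypothetical objects
n = len(data)
n_attrs = 2
SIMILARITY_CUTOFF = 1.0  # attribute values closer than this count as similar
EVAPORATION = 0.95
N_STEPS = 500

# One pheromone matrix per attribute, rather than one per whole object.
pheromone = [[[0.0] * n for _ in range(n)] for _ in range(n_attrs)]

for _ in range(N_STEPS):
    # Evaporate all pheromones slightly each step.
    for a in range(n_attrs):
        for i in range(n):
            for j in range(n):
                pheromone[a][i][j] *= EVAPORATION
    # An "ant" walks a random edge and deposits per-attribute pheromone.
    i, j = random.sample(range(n), 2)
    for a in range(n_attrs):
        if abs(data[i][a] - data[j][a]) < SIMILARITY_CUTOFF:
            pheromone[a][i][j] += 1.0
            pheromone[a][j][i] += 1.0

# Combine attribute pheromones; strongly marked pairs belong together.
for i in range(n):
    for j in range(i + 1, n):
        total = sum(pheromone[a][i][j] for a in range(n_attrs))
        print(f"objects {i} and {j}: combined pheromone = {total:.1f}")
```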
Abstract:
Clinical Decision Support Systems (CDSSs) need to disseminate expertise in formats that suit different end users and with functionality tuned to the context of assessment. This paper reports research into a method for designing and implementing knowledge structures that facilitate the required flexibility. A psychological model of expertise is represented using a series of formally specified and linked XML trees that capture increasing elements of the model, starting with hierarchical structuring, incorporating reasoning with uncertainty, and ending with delivering the final CDSS. The method was applied to the Galatean Risk and Safety Tool, GRiST, which is a web-based clinical decision support system (www.egrist.org) for assessing mental-health risks. Results of its clinical implementation demonstrate that the method can produce a system able to deliver expertise targeted and formatted for specific patient groups, different clinical disciplines, and alternative assessment settings. The approach may be useful for developing other real-world systems using human expertise and is currently being applied to a logistics domain. © 2013 Polish Information Processing Society.
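As an illustration of the kind of structure described, the sketch below builds a small hierarchical XML tree with uncertainty weights attached to nodes; the element and attribute names are hypothetical and do not reproduce the actual GRiST schema.

```python
import xml.etree.ElementTree as ET

# Hypothetical hierarchical knowledge tree: concepts decompose into
# sub-concepts and leaf data, each carrying a weight that stands in for
# its relative influence on the parent (the uncertainty element).

root = ET.Element("concept", name="suicide-risk")

feelings = ET.SubElement(root, "concept", name="feelings-and-emotions",
                         weight="0.4")
ET.SubElement(feelings, "datum", name="hopelessness", weight="0.7",
              scale="0-10")
ET.SubElement(feelings, "datum", name="anxiety", weight="0.3", scale="0-10")

behaviour = ET.SubElement(root, "concept", name="risk-behaviour",
                          weight="0.6")
ET.SubElement(behaviour, "datum", name="previous-attempts", weight="0.8",
              scale="count")

# Serialise; a CDSS front end could render such a tree differently for
# each clinical discipline or assessment setting.
print(ET.tostring(root, encoding="unicode"))
```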
Abstract:
Purpose: The paper aims to design and prove the concept of a micro-industry using trigeneration fuelled by biomass, for sustainable development in rural NW India. Design/methodology/approach: This is being tested at the village of Malunga, near Jodhpur in Rajasthan. The system components comprise the burning of waste biomass for steam generation, the use of that steam for power generation, a cooling system for fruit ripening, and the use of steam for producing distilled water. The site was selected taking into account local economic and social needs, the biomass resources available from agricultural activities, and the presence of an NGO competent to facilitate the running of the enterprise. The trigeneration system was designed to integrate off-the-shelf equipment for power generation, using boilers of approximate total capacity 1 tonne of fuel per hour and a back-pressure steam turbo-generator (200 kW). Cooling is provided by a vapour absorption machine (VAM). Findings: The financial analysis indicates a payback time of less than two years, although this is sensitive to market fluctuations and the availability of raw materials. Originality/value: Although comparable trigeneration systems already exist in large food-processing industries and in space heating and cooling applications, they have not previously been used for rural micro-industry. The small-scale (1-2 m3/h output) multiple-effect distillation unit (three effects plus condenser) has not previously been deployed at field level. © Emerald Group Publishing Limited.
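To make the payback reasoning concrete, here is a simple undiscounted payback sketch; the abstract reports only the sub-two-year result, so all figures below are invented placeholders, and the loop illustrates the sensitivity to market fluctuations the findings caution about.

```python
# Hypothetical figures: the abstract does not publish costs or revenues.
capital_cost = 150_000.0       # total installed cost (currency units)
annual_revenue = 95_000.0      # power + cooling + distilled water sales
annual_fuel_and_om = 12_000.0  # biomass fuel and operation/maintenance

def payback_years(capex, revenue, running_cost):
    """Simple (undiscounted) payback period in years."""
    return capex / (revenue - running_cost)

base = payback_years(capital_cost, annual_revenue, annual_fuel_and_om)
print(f"base case: {base:.2f} years")

# Sensitivity of payback to swings in revenue.
for swing in (-0.2, -0.1, 0.0, 0.1, 0.2):
    pb = payback_years(capital_cost, annual_revenue * (1 + swing),
                       annual_fuel_and_om)
    print(f"revenue {swing:+.0%}: payback = {pb:.2f} years")
```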
Abstract:
Objective: To independently evaluate the impact of the second phase of the Health Foundation's Safer Patients Initiative (SPI2) on a range of patient safety measures. Design: A controlled before and after design. Five substudies: a survey of staff attitudes; review of case notes from high risk (respiratory) patients in medical wards; review of case notes from surgical patients; indirect evaluation of hand hygiene by measuring hospital use of handwashing materials; and measurement of outcomes (adverse events, mortality among high risk patients admitted to medical wards, patients' satisfaction, mortality in intensive care, rates of hospital acquired infection). Setting: NHS hospitals in England. Participants: Nine hospitals participating in SPI2 and nine matched control hospitals. Intervention: The SPI2 intervention was similar to SPI1, with somewhat modified goals, a slightly longer intervention period, and a smaller budget per hospital. Results: One of the scores (organisational climate) showed a significant (P=0.009) difference in rate of change over time, which favoured the control hospitals, though the difference was only 0.07 points on a five point scale. Results of the explicit case note reviews of high risk medical patients showed that certain practices improved over time in both control and SPI2 hospitals (and none deteriorated), but there were no significant differences between control and SPI2 hospitals. Monitoring of vital signs improved across control and SPI2 sites. This temporal effect was significant for monitoring the respiratory rate at both the six hour (adjusted odds ratio 2.1, 99% confidence interval 1.0 to 4.3; P=0.010) and 12 hour (2.4, 1.1 to 5.0; P=0.002) periods after admission. There was no significant additional effect of SPI2 on any of the measures of vital signs. Use of a recommended system for scoring the severity of pneumonia improved from 1.9% (1/52) to 21.4% (12/56) of control patients and from 2.0% (1/50) to 41.7% (25/60) of SPI2 patients. This temporal change was significant (7.3, 1.4 to 37.7; P=0.002), but the difference in difference was not significant (2.1, 0.4 to 11.1; P=0.236). There were no notable or significant changes in the pattern of prescribing errors, either over time or between control and SPI2 hospitals. Two items of medical history taking (exercise tolerance and occupation) showed significant improvement over time, across both control and SPI2 hospitals, but no additional SPI2 effect. The holistic review showed no significant changes in error rates either over time or between control and SPI2 hospitals. The explicit case note review of perioperative care showed that adherence rates for two of the four perioperative standards targeted by SPI2 were already good at baseline, exceeding 94% for antibiotic prophylaxis and 98% for deep vein thrombosis prophylaxis. Intraoperative monitoring of temperature improved over time in both groups, but this was not significant (1.8, 0.4 to 7.6; P=0.279), and there were no additional effects of SPI2. A dramatic rise in consumption of soap and alcohol hand rub was similar in control and SPI2 hospitals (P=0.760 and P=0.889, respectively), as was the corresponding decrease in rates of Clostridium difficile and meticillin resistant Staphylococcus aureus infection (P=0.652 and P=0.693, respectively). Mortality rates of medical patients included in the case note reviews increased in control hospitals from 17.3% (42/243) to 21.4% (24/112), while in SPI2 hospitals they fell from 10.3% (24/233) to 6.1% (7/114) (P=0.043).
Fewer than 8% of deaths were classed as avoidable, and changes in these proportions could not explain the divergence of overall death rates between control and SPI2 hospitals. There was no significant difference in the rate of change in mortality in intensive care. Patients' satisfaction improved in both control and SPI2 hospitals on all dimensions, but again there were no significant differences between the two groups of hospitals. Conclusions: Many aspects of care are already good or improving across the NHS in England, suggesting considerable improvements in quality across the board. These improvements are probably due to contemporaneous policy activities relating to patient safety, including those with features similar to the SPI, and the emergence of professional consensus on some clinical processes. This phenomenon might have attenuated the incremental effect of the SPI, making it difficult to detect. Alternatively, the full impact of the SPI might be observable only in the longer term. The conclusion of this study could have been different if concurrent controls had not been used.
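The "difference in difference" estimates reported above compare changes over time between SPI2 and control hospitals. The sketch below shows this logic as a logistic regression with a group x period interaction, assuming the statsmodels library; the data are simulated, not the study's.

```python
import numpy as np
import statsmodels.api as sm

# Simulated difference-in-difference setup: a binary outcome, with a
# secular (temporal) improvement in both hospital groups and a small extra
# intervention effect. None of these numbers come from the study.
rng = np.random.default_rng(1)
n = 400
group = rng.integers(0, 2, n)   # 0 = control hospital, 1 = SPI2 hospital
period = rng.integers(0, 2, n)  # 0 = before, 1 = after
logit = -1.0 + 0.8 * period + 0.1 * group + 0.2 * group * period
outcome = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(float)

X = sm.add_constant(np.column_stack([group, period, group * period]))
model = sm.Logit(outcome, X).fit(disp=False)

# exp(coefficient on the interaction) is the difference-in-difference
# odds ratio: the net effect of the intervention beyond the secular trend.
print("difference-in-difference odds ratio:", np.exp(model.params[3]))
```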
Abstract:
Objectives: To conduct an independent evaluation of the first phase of the Health Foundation's Safer Patients Initiative (SPI), and to identify the net additional effect of the SPI and any differences in changes between participating and non-participating NHS hospitals. Design: Mixed method evaluation involving five substudies, with a before and after design. Setting: NHS hospitals in the United Kingdom. Participants: Four hospitals (one in each country of the UK) participating in the first phase of the SPI (SPI1); 18 control hospitals. Intervention: SPI1 was a compound (multicomponent) organisational intervention delivered over 18 months that focused on improving the reliability of specific frontline care processes in designated clinical specialties and promoting organisational and cultural change. Results: Senior staff members were knowledgeable and enthusiastic about SPI1. There was a small (0.08 points on a 5 point scale) but significant (P<0.01) effect in favour of the SPI1 hospitals in one of 11 dimensions of the staff questionnaire (organisational climate). Qualitative evidence showed only modest penetration of SPI1 at medical ward level. Although SPI1 was designed to engage staff from the bottom up, it did not usually feel like this to those working on the wards, and questions about the legitimacy of some aspects of SPI1 were raised. Of the five components used to identify patients at risk of deterioration - monitoring of vital signs (14 items); routine tests (three items); evidence based standards specific to certain diseases (three items); prescribing errors (multiple items from the British National Formulary); and medical history taking (11 items) - there was little net difference between control and SPI1 hospitals, except in relation to quality of monitoring of acute medical patients, which improved on average over time across all hospitals. Recording of respiratory rate increased to a greater degree in SPI1 than in control hospitals; in the second six hours after admission, recording increased from 40% (93) to 69% (165) in control hospitals and from 37% (141) to 78% (296) in SPI1 hospitals (odds ratio for "difference in difference" 2.1, 99% confidence interval 1.0 to 4.3; P=0.008). Use of a formal scoring system for patients with pneumonia also increased over time (from 2% (102) to 23% (111) in control hospitals and from 2% (170) to 9% (189) in SPI1 hospitals); this change favoured controls and was not significant (0.3, 0.02 to 3.4; P=0.173). There were no improvements in the proportion of prescription errors and no effects that could be attributed to SPI1 in non-targeted generic areas (such as enhanced safety culture). On some measures, the lack of effect could be because compliance was already high at baseline (such as use of steroids in over 85% of cases where indicated), but even where there was more room for improvement (such as in quality of medical history taking), there was no significant additional net effect of SPI1. There were no changes over time or between control and SPI1 hospitals in errors or rates of adverse events in patients in medical wards. Mortality increased from 11% (27) to 16% (39) among controls and decreased from 17% (63) to 13% (49) among SPI1 hospitals, but the risk adjusted difference was not significant (0.5, 0.2 to 1.4; P=0.085). Poor care was a contributing factor in four of the 178 deaths identified by review of case notes. The survey of patients showed no significant differences apart from an increase in perception of cleanliness in favour of SPI1 hospitals.
Conclusions: The introduction of SPI1 was associated with improvements in one of the types of clinical process studied (monitoring of vital signs) and in one measure of staff perceptions of organisational climate. There was no additional effect of SPI1 on other targeted issues, nor on other measures of generic organisational strengthening.
Abstract:
The present report reviews behavioural, electroencephalographic, and especially magnetoencephalographic findings on the cortical mechanisms underlying attentional processes that separate targets from distractors and that ensure durable target representations for goal-directed action. A common way of investigating these mechanisms is to observe the system's overt and covert behaviour when capacity limits are reached. Here we focus on temporally enhanced processing load, namely on performance deficits occurring under rapid-serial-visual-presentation (RSVP) conditions, the most prominent of which is the so-called "attentional blink" (AB) effect. We first report MEG findings on the time course of activation, which shows modulations around 300 ms after target onset that reflect the demands and success of target consolidation. We then report findings on long-range inter-area phase synchronization, which is hypothesized to mediate communication within the attentional network. Changes in synchronization reflect changes in the attentional demands of the task and are directly related to behavioural performance. Furthermore, enhanced vigilance of the system systematically increases synchronization indices. A hypothetical framework is sketched out that aims to explain limitations in multiple-target consolidation under RSVP conditions.
Abstract:
Detection thresholds for two visual- and two auditory-processing tasks were obtained for 73 children and young adults who varied broadly in reading ability. A reading-disabled subgroup had significantly higher thresholds than a normal-reading subgroup for the auditory tasks only. When analyzed across the whole group, the auditory tasks and one of the visual tasks, coherent motion detection, were significantly related to word reading. These effects were largely independent of ADHD ratings; however, none of these measures accounted for significant variance in word reading after controlling for full-scale IQ. In contrast, phoneme awareness, rapid naming, and nonword repetition each explained substantial, significant word reading variance after controlling for IQ, suggesting more specific roles for these oral language skills in the development of word reading. © 2004 Elsevier Inc. All rights reserved.
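The abstract's key analytic step, testing whether a predictor explains word-reading variance after controlling for full-scale IQ, can be illustrated with a hierarchical regression on simulated data, as in the sketch below; the variable names and effect sizes are invented.

```python
import numpy as np

# Hierarchical-regression sketch: compare R^2 of a model with IQ alone
# against IQ plus a candidate predictor. All data are simulated; only the
# sample size (73) echoes the abstract.
rng = np.random.default_rng(2)
n = 73
iq = rng.normal(100, 15, n)
phoneme_awareness = 0.4 * (iq - 100) / 15 + rng.normal(0, 1, n)
reading = 0.5 * (iq - 100) / 15 + 0.4 * phoneme_awareness \
    + rng.normal(0, 1, n)

def r_squared(X, y):
    """R^2 of an ordinary least-squares fit with an intercept."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

r2_iq = r_squared(iq.reshape(-1, 1), reading)
r2_full = r_squared(np.column_stack([iq, phoneme_awareness]), reading)
print(f"R^2 (IQ only) = {r2_iq:.3f}")
print(f"incremental R^2 for phoneme awareness = {r2_full - r2_iq:.3f}")
```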
Abstract:
The accurate in silico identification of T-cell epitopes is a critical step in the development of peptide-based vaccines, reagents, and diagnostics, and it has a direct impact on the success of subsequent experimental work. Epitopes arise as a consequence of complex proteolytic processing within the cell. Prior to being recognized by T cells, an epitope is presented on the cell surface as a complex with a major histocompatibility complex (MHC) protein; a prerequisite for T-cell recognition is therefore that an epitope is also a good MHC binder. Thus, T-cell epitope prediction overlaps strongly with the prediction of MHC binding. In the present study, we compare discriminant analysis and multiple linear regression as algorithmic engines for defining quantitative matrices for binding affinity prediction. We apply these methods to peptides that bind the well-studied human MHC allele HLA-A*0201. A matrix combining the results of the two methods proved powerfully predictive under cross-validation. The new matrix was also tested on an external set of 160 binders to HLA-A*0201; it recognized 135 (84%) of them.
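As an illustration of the quantitative-matrix idea, the sketch below one-hot encodes 9-mer peptides and fits position-specific amino-acid contributions by multiple linear regression, one of the two engines compared in the study; the peptides and affinities are invented, and this is not the authors' matrix.

```python
import numpy as np

# Toy quantitative matrix: each cell is the contribution of one amino acid
# at one position to predicted binding affinity, fitted by least squares.
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
PEP_LEN = 9

def one_hot(peptide):
    """Encode a 9-mer as a flat position x amino-acid indicator vector."""
    vec = np.zeros(PEP_LEN * len(AMINO_ACIDS))
    for pos, aa in enumerate(peptide):
        vec[pos * len(AMINO_ACIDS) + AMINO_ACIDS.index(aa)] = 1.0
    return vec

peptides = ["LLDVTAAKV", "GLLGNVSTV", "ALAKAAAAL", "KLNEPVLLL"]  # invented
affinities = np.array([7.2, 6.8, 5.1, 6.0])  # e.g. -log10(IC50), invented

X = np.array([one_hot(p) for p in peptides])
# Least-squares fit; each coefficient is one cell of the scoring matrix.
coeffs, *_ = np.linalg.lstsq(X, affinities, rcond=None)
matrix = coeffs.reshape(PEP_LEN, len(AMINO_ACIDS))

# A new peptide's predicted affinity is the sum of its matrix cells.
print("predicted affinity:", one_hot("LLDVTAAKV") @ coeffs)
```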