277 results for message acceptance
Abstract:
There is a growing demand for sustainable retirement villages in Australia due to an ageing population and increasing public acceptance of sustainable development. This research aims to gain a better understanding of retirees’ perceptions of sustainable retirement living and their attitudes towards sustainable developments via a questionnaire survey. The results showed that current and potential residents of retirement villages are generally very conscious of unsustainable resource consumption and would like their residences and community to be more environmentally friendly and energy efficient. The cost of energy supply is a concern to the majority of respondents. Residents need education about recycling household waste and using the available facilities. A better understanding of retirees’ awareness of and attitudes towards sustainability issues will help improve the sustainable development of retirement villages in the future.
Abstract:
Traditional treatments for weight management have focussed on prescribed dietary restriction, regular exercise, or a combination of both. However, recidivism for such prescribed treatments remains high, particularly among the overweight and obese. The aim of this thesis was to investigate voluntary dietary changes in the presence of prescribed mixed-mode exercise, conducted over 16 weeks. With the implementation of a single lifestyle change (exercise), it was postulated that the onerous burden of concomitant dietary and exercise compliance would be reduced, leading to voluntary lifestyle changes in areas such as diet. In addition, the failure of exercise as a single weight loss treatment has been attributed to compensatory energy intakes, although much of the evidence comes from acute exercise studies, necessitating investigation of compensatory intakes during a long-term exercise intervention. Following 16 weeks of moderate intensity exercise, 30 overweight and obese (BMI ≥ 25.00 kg·m⁻²) men and women showed small but statistically significant decreases in mean dietary fat intakes, without compensatory increases in other macronutrient or total energy intakes. Indeed, total energy intakes were significantly lower for men and women following the exercise intervention, due to the decreases in dietary fat intakes. There was a risk that accepting the statistical validity of the small changes to dietary fat intakes constituted a Type 1 error, that is, a false rejection of the null hypothesis. Oro-sensory perception of changes in fat loads was therefore investigated to determine whether the measured dietary fat changes were detectable by the human palate. The ability to detect small changes in dietary fat provides sensory feedback for self-initiated dietary changes, but lean and overweight participants were unable to distinguish changes to fat loads of similar magnitude to those measured in the exercise intervention study. Accuracy of the dietary measurement instrument was improved, with the effects of random error (day-to-day variability) minimised by the use of a statistically validated 8-day, multiple-pass, 24-hour dietary recall instrument. However, systematic error (underreporting) may have masked the magnitude of dietary change, particularly the reduction in dietary fat intakes. A purported biomarker, plasma apolipoprotein A-IV (apoA-IV), was subsequently investigated to monitor systematic error in self-reported dietary intakes. Changes in plasma apoA-IV concentrations were directly correlated with increases and decreases in dietary fat intakes, suggesting that this objective marker may be a useful tool to improve the accuracy of dietary measurement in overweight and obese populations, who are susceptible to dietary underreporting.
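The biomarker validation step described above amounts to testing whether within-person changes in plasma apoA-IV track within-person changes in reported fat intake. A minimal sketch of such a check, using fabricated illustrative numbers rather than the thesis data:

```python
# A minimal sketch of the biomarker check: correlating within-person changes in
# plasma apoA-IV with changes in reported dietary fat intake. Values are
# fabricated for illustration and are not the thesis data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
delta_fat_g = np.array([-12, -8, -15, -3, -10, -6, -14, -2, -9, -11], float)
delta_apoa4 = 0.4 * delta_fat_g + rng.normal(0.0, 1.5, size=10)  # toy scale

r, p = stats.pearsonr(delta_fat_g, delta_apoa4)
print(f"r = {r:.2f}, p = {p:.4f}")  # a direct correlation supports the biomarker
```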
Abstract:
We derive a semianalytical model to describe the interaction of a single photon emitter and a collection of arbitrarily shaped metal nanoparticles. The theory treats the metal nanoparticles classically within the electrostatic eigenmode method, wherein the surface plasmon resonances of collections of nanoparticles are represented by the hybridization of the plasmon modes of the noninteracting particles. The single photon emitter is represented by a quantum mechanical two-level system that exhibits line broadening due to a finite spontaneous decay rate. Plasmon-emitter coupling is described by solving the resulting Bloch equations. We illustrate the theory by studying model systems consisting of a single emitter coupled to one, two, and three nanoparticles, and we also compare the predictions of our model to published experimental data. ©2012 American Physical Society.
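The emitter side of such a model reduces to driven two-level-system (optical Bloch) dynamics, with the plasmonic environment entering through a modified decay rate and Rabi frequency. The sketch below integrates that standard textbook form with illustrative parameters; it is not the paper's semianalytical model, which additionally computes the electrostatic eigenmodes of the nanoparticle collection:

```python
# A minimal sketch: optical Bloch equations for a driven two-level emitter whose
# decay rate GAMMA and Rabi frequency OMEGA are taken as already modified by the
# plasmonic environment. Parameter values are illustrative, not from the paper.
import numpy as np
from scipy.integrate import solve_ivp

GAMMA = 1.0   # plasmon-enhanced spontaneous decay rate
OMEGA = 2.0   # field-enhanced Rabi frequency
DELTA = 0.5   # drive-emitter detuning

def bloch(t, s):
    """Bloch vector s = (u, v, w) in the rotating frame; w is the inversion."""
    u, v, w = s
    return [DELTA * v - 0.5 * GAMMA * u,
            -DELTA * u + OMEGA * w - 0.5 * GAMMA * v,
            -OMEGA * v - GAMMA * (w + 1.0)]

sol = solve_ivp(bloch, (0.0, 20.0), [0.0, 0.0, -1.0])  # start in the ground state
u, v, w = sol.y[:, -1]
print(f"long-time excited-state population ~ {(w + 1.0) / 2.0:.3f}")
```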
Abstract:
The rate of singlet-to-triplet intersystem crossing in 1,4-didehydrobenzene (the biradical produced as a reactive intermediate in the thermal cycloaromatization of enediynes) cannot be increased by the application of an external magnetic field. The rate of product formation and the distribution of stable products of 2,3-di-n-propyl-1,4-didehydrobenzene thermolysis are unchanged at magnetic flux densities in the range 0–2000 G and at 66 000 G. Similarly, the rate of thermolysis of an unsymmetrical enediyne is insensitive to magnetic flux in the same range. This finding precludes magnetic-field modulation of enediyne reaction rates in pharmaceutical and synthetic pursuits.
Abstract:
Traditional analytic models for power system fault diagnosis are usually formulated as an unconstrained 0–1 integer programming problem. The key issue in these models is to seek the fault hypothesis that minimizes the discrepancy between the actual and the expected states of the protective relays and circuit breakers concerned. The temporal information of alarm messages has not been well utilized in these methods; as a result, the diagnosis results may not be unique and hence may be indefinite, especially when complicated and multiple faults occur. In order to solve this problem, this paper presents a novel analytic model employing the temporal information of alarm messages along with the concept of the related path. The temporal relationships among the actions of protective relays and circuit breakers, and the different protection configurations in a modern power system, can be reasonably represented by the developed model, and therefore the diagnosed results will be more definite under different fault circumstances. Finally, an actual power system fault case is used to verify the proposed method.
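For context, the classic non-temporal formulation described in the opening sentences can be sketched as a brute-force search over 0-1 fault hypotheses; the toy protection logic below is a hypothetical stand-in, not the paper's model:

```python
# A minimal sketch of the classic (non-temporal) 0-1 formulation: enumerate
# fault hypotheses and choose the one minimizing the discrepancy between
# observed and expected device states.
from itertools import product

SECTIONS = ["line1", "line2", "bus1"]
observed = {
    "relay_line1": 1, "breaker_line1": 1,
    "relay_line2": 0, "breaker_line2": 0,
    "relay_bus1": 0, "breaker_bus1": 0,
}

def expected_states(hypothesis):
    """Toy logic: a faulted section trips its own relay and breaker."""
    expected = {}
    for section, faulted in zip(SECTIONS, hypothesis):
        expected[f"relay_{section}"] = int(faulted)
        expected[f"breaker_{section}"] = int(faulted)
    return expected

def discrepancy(hypothesis):
    expected = expected_states(hypothesis)
    return sum(abs(observed[k] - expected[k]) for k in observed)

best = min(product([0, 1], repeat=len(SECTIONS)), key=discrepancy)
print(dict(zip(SECTIONS, best)))  # -> {'line1': 1, 'line2': 0, 'bus1': 0}
```

With ambiguous alarms, several hypotheses can tie on this discrepancy score, which is exactly the non-uniqueness the paper's temporal model is designed to resolve.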
Abstract:
Cloud computing has emerged as a major ICT trend and has been acknowledged as a key theme of industry by prominent ICT organisations. However, one of the major challenges facing the cloud computing concept and its global acceptance is how to secure and protect the data that is the property of the user. The geographic location of cloud data storage centres is an important issue for many organisations and individuals due to the regulations and laws that require data and operations to reside in specific geographic locations. Thus, data owners may need to ensure that their cloud providers do not violate the SLA contract by moving their data to another geographic location. This paper introduces an architecture for a new approach to geographic location assurance, which combines a proof-of-storage (POS) protocol and a distance-bounding protocol. This allows the client to check where their stored data is located, without relying on the word of the cloud provider. The architecture aims to achieve better security and more flexible geographic assurance within the cloud computing environment.
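The distance-bounding half of such a scheme rests on a simple physical bound: a response cannot arrive faster than light travels, so the round-trip time of a challenge-response exchange upper-bounds the prover's distance. A minimal sketch of that bound, with a simulated network call standing in for the real protocol exchange:

```python
# A minimal sketch of the distance-bounding idea: the verifier times one
# challenge-response round trip and bounds the prover's distance by the speed
# of light. The lambda below simulates a 20 ms exchange; a real implementation
# would interleave proof-of-storage challenges.
import time

C_KM_PER_S = 299_792.458  # speed of light in vacuum

def measure_rtt_s(send_challenge):
    """Time one challenge-response round trip, in seconds."""
    t0 = time.perf_counter()
    send_challenge()  # placeholder for the real protocol exchange
    return time.perf_counter() - t0

def distance_upper_bound_km(rtt_s):
    # The signal travels to the prover and back, hence the factor of two.
    return C_KM_PER_S * rtt_s / 2.0

rtt = measure_rtt_s(lambda: time.sleep(0.02))
print(f"the stored data is within ~{distance_upper_bound_km(rtt):.0f} km")
```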
Abstract:
Limited research is available on how well visual cues integrate with auditory cues to improve speech intelligibility in persons with visual impairments, such as cataracts. We investigated whether simulated cataracts interfered with participants’ ability to use visual cues to help disambiguate a spoken message in the presence of spoken background noise. We tested 21 young adults with normal visual acuity and hearing sensitivity. Speech intelligibility was tested under three conditions: auditory only with no visual input, auditory-visual with normal viewing, and auditory-visual with simulated cataracts. Central Institute for the Deaf (CID) Everyday Speech Sentences were spoken by a live talker, mimicking a pre-recorded audio track, in the presence of pre-recorded four-person background babble at a signal-to-noise ratio (SNR) of -13 dB. The talker was masked to the experimental conditions to control for experimenter bias. Relative to the normal vision condition, speech intelligibility was significantly poorer in the simulated cataract condition, t(20) = 4.17, p < .01, Cohen’s d = 1.0. These results suggest that cataracts can interfere with speech perception, whether through a reduction in visual cues, less effective audio-visual integration, or a combination of the two. These novel findings contribute to our understanding of the association between two common sensory problems in adults: reduced contrast sensitivity associated with cataracts and reduced face-to-face communication in noise.
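The reported statistic is a within-subjects comparison, i.e. a paired t-test with a paired-samples Cohen's d. A minimal sketch of that computation on fabricated illustrative scores (not the study's data):

```python
# A minimal sketch of the reported within-subjects comparison: a paired t-test
# and a paired-samples Cohen's d on fabricated intelligibility scores.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
normal_vision = np.array([72, 65, 80, 58, 77, 69, 74, 63, 71, 66, 75,
                          60, 79, 68, 73, 62, 70, 76, 64, 67, 78], float)
simulated_cataract = normal_vision - rng.normal(6.0, 3.0, size=21)

t, p = stats.ttest_rel(normal_vision, simulated_cataract)
diff = normal_vision - simulated_cataract
d = diff.mean() / diff.std(ddof=1)  # Cohen's d for paired observations
print(f"t({len(diff) - 1}) = {t:.2f}, p = {p:.4f}, d = {d:.2f}")
```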
Abstract:
Most unsignalised intersection capacity calculation procedures are based on gap acceptance models, so the accuracy of critical gap estimation affects the accuracy of capacity and delay estimation. Several methods have been published to estimate drivers’ sample mean critical gap, with the Maximum Likelihood Estimation (MLE) technique regarded as the most accurate. This study assesses three novel methods, the Average Central Gap (ACG), Strength Weighted Central Gap (SWCG), and Mode Central Gap (MCG) methods, against MLE for their fidelity in rendering true sample mean critical gaps. A Monte Carlo event-based simulation model was used to draw the maximum rejected gap and the accepted gap for each of a sample of 300 drivers across 32 simulation runs. The simulated mean critical gap was varied between 3 s and 8 s, while the offered gap rate was varied between 0.05 veh/s and 0.55 veh/s. This study affirms that MLE provides a close to perfect fit to simulated mean critical gaps across a broad range of conditions. The MCG method also provides an almost perfect fit and has superior computational simplicity and efficiency to MLE. The SWCG method performs robustly under high flows, but poorly under low to moderate flows. Further research is recommended using field traffic data, under a variety of minor stream and major stream flow conditions and for a variety of minor stream movement types, to compare critical gap estimates from MLE against MCG. Should the MCG method prove as robust as MLE, serious consideration should be given to its adoption for estimating critical gap parameters in guidelines.
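In the MLE approach referenced above, each driver's critical gap is taken to lie between that driver's maximum rejected gap and accepted gap, and a distributional assumption (commonly lognormal) makes each driver's likelihood contribution F(accepted) minus F(rejected). A minimal sketch under that lognormal assumption, with simulated data in place of the study's Monte Carlo model:

```python
# A minimal sketch of the MLE approach (not the study's code): each driver's
# critical gap is assumed lognormal and known only to lie between the maximum
# rejected gap r_i and the accepted gap a_i, so each driver contributes
# F(a_i) - F(r_i) to the likelihood. Data are simulated for illustration.
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(1)
true_crit = rng.lognormal(mean=np.log(5.0), sigma=0.2, size=300)  # seconds
rejected = np.clip(true_crit - rng.uniform(0.1, 1.5, 300), 0.1, None)
accepted = true_crit + rng.uniform(0.1, 1.5, 300)

def neg_log_lik(params):
    mu, sigma = params
    if sigma <= 0:
        return np.inf
    cdf = lambda x: stats.lognorm.cdf(x, s=sigma, scale=np.exp(mu))
    return -np.sum(np.log(cdf(accepted) - cdf(rejected) + 1e-12))

mu_hat, sigma_hat = optimize.minimize(
    neg_log_lik, x0=[np.log(4.0), 0.5], method="Nelder-Mead").x
# mean of a lognormal distribution: exp(mu + sigma^2 / 2)
print(f"estimated mean critical gap ~ {np.exp(mu_hat + sigma_hat**2 / 2):.2f} s")
```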
Abstract:
Quality-oriented management systems and methods have become the dominant business and governance paradigm. From this perspective, satisfying customers’ expectations by supplying reliable, good quality products and services is the key factor for an organisation and even a government. During recent decades, Statistical Quality Control (SQC) methods have been developed as the technical core of quality management and the continuous improvement philosophy, and are now being applied widely to improve the quality of products and services in the industrial and business sectors. Recently, SQC tools, in particular quality control charts, have been used in healthcare surveillance. In some cases, these tools have been modified and developed to better suit the characteristics and needs of the health sector. It seems that some of the work in the healthcare area has evolved independently of the development of industrial statistical process control methods. Therefore, analysing and comparing the paradigms and characteristics of quality control charts and techniques across the different sectors presents opportunities for transferring knowledge and for future development in each sector. Meanwhile, the capabilities of the Bayesian approach, particularly Bayesian hierarchical models and computational techniques in which all uncertainty is expressed as a structure of probability, facilitate decision making and cost-effectiveness analyses. Therefore, this research investigates the use of the quality improvement cycle in a health setting using clinical data from a hospital. The need for clinical data for monitoring purposes is investigated in two respects. A framework and appropriate tools from the industrial context are proposed and applied to evaluate and improve data quality in available datasets and data flow; a data capturing algorithm using Bayesian decision making methods is then developed to determine an economical sample size for statistical analyses within the quality improvement cycle. Having ensured clinical data quality, some characteristics of control charts in the health context, including the necessity of monitoring attribute data and correlated quality characteristics, are considered. To this end, multivariate control charts from the industrial context are adapted to monitor radiation delivered to patients undergoing diagnostic coronary angiograms, and various risk-adjusted control charts are constructed and investigated for monitoring binary outcomes of clinical interventions as well as post-intervention survival time. Meanwhile, adoption of a Bayesian approach is proposed as a new framework for estimating the change point following a control chart’s signal. This estimate aims to facilitate root-cause analysis in the quality improvement cycle, since it narrows the search for the potential causes of detected changes to a tighter time-frame prior to the signal. This approach yields highly informative estimates for change point parameters, since the results are based on full probability distributions. Using Bayesian hierarchical models and Markov chain Monte Carlo computational methods, Bayesian estimators of the time and the magnitude of various change scenarios, including step changes, linear trends and multiple changes in a Poisson process, are developed and investigated.
The benefits of change point investigation are revisited and promoted in monitoring hospital outcomes, where the developed Bayesian estimator reports the true time of shifts, compared with a priori known causes, detected by control charts monitoring the rate of excess usage of blood products and major adverse events during and after cardiac surgery in a local hospital. The development of the Bayesian change point estimators is then extended to healthcare surveillance for processes in which pre-intervention characteristics of patients affect the outcomes. In this setting, the Bayesian estimator is first extended to capture patient mix (covariates) through the risk models underlying risk-adjusted control charts. Variations of the estimator are developed to estimate the true time of step changes and linear trends in the odds ratio of intensive care unit outcomes in a local hospital. Secondly, the Bayesian estimator is extended to identify the time of a shift in mean survival time after a clinical intervention which is being monitored by risk-adjusted survival time control charts. In this context, the survival time after a clinical intervention is also affected by patient mix, and the survival function is constructed using a survival prediction model. The simulation studies undertaken in each research component, and the results obtained, strongly recommend the developed Bayesian estimators as an alternative for change point estimation within the quality improvement cycle in healthcare surveillance as well as in industrial and business contexts. The superiority of the proposed Bayesian framework and estimators is enhanced when the probability quantification, flexibility and generalisability of the developed models are also considered. The advantages of the Bayesian approach seen in the general quality control context may also extend to the industrial and business domains where quality monitoring was initially developed.
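At its core, Bayesian change point estimation for a step change in a Poisson process places a posterior distribution over the unknown change time. The thesis uses hierarchical priors and Markov chain Monte Carlo; the sketch below shows only the underlying idea, with known pre- and post-change rates so the posterior can be computed on a grid:

```python
# A minimal sketch of the core idea (the thesis itself uses hierarchical priors
# and MCMC): with known pre- and post-change Poisson rates, the posterior over
# the change point tau is proportional to the likelihood of the split series.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
counts = np.concatenate([rng.poisson(4.0, 40), rng.poisson(7.0, 20)])  # true tau = 40
lam0, lam1 = 4.0, 7.0

log_post = np.array([
    stats.poisson.logpmf(counts[:tau], lam0).sum()
    + stats.poisson.logpmf(counts[tau:], lam1).sum()
    for tau in range(1, len(counts))
])
post = np.exp(log_post - log_post.max())
post /= post.sum()  # flat prior over candidate change points
print(f"posterior mode of the change point: t = {np.argmax(post) + 1}")
```

Because the output is a full posterior rather than a single estimate, credible intervals for the change time come for free, which is the "highly informative" property the abstract emphasises.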
Abstract:
For decades the prevailing idea in B2B marketing has been that buyers are motivated by product/service specifications. Sellers are put on approved supplier lists, invited to respond to RFPs, and are selected on the basis of superior products, at the right price, delivered on time. The history of B2B advertising is filled with the advice that if you “provide product specifications”, your advertising will be noticed, lead to sales inquiries, and eventually result in higher sales. Advertising filled with abstractions might work in the B2C market, but the B2B marketplace is about being literal. What we know about advertising, and particularly the message component of advertising, is based on a combination of experience, unproven ideas and a bit of social science. Over the years, advertising guidelines produced by the predecessors of the BMA (the National Industrial Advertising Association, the Association of Industrial Advertising, and the Business/Professional Advertising Association) stressed emphasizing product features and tangible benefits. The major publishers of B2B magazines (e.g., McGraw-Hill and Penton Publishing) made similar recommendations, and B2B marketing books recommend advertising that focuses on specific product features (Kotler and Pfoertsch, 2006; Lamons, 2005). More recently, abstraction in advertising messages has penetrated the B2B marketplace. Even though such advertising legends as David Ogilvy (1963, 1985) frequently recommended advertising based on hard-core information, we have seen the growing use of emotional appeals, including humor, fear and parental affection. Beyond the use of emotion, marketers attempt to build a stronger connection between their brands and buyers through the use of abstraction and symbolism. Two examples of B2B advertisements are shown below: Figure 1A is high in literalism and Figure 1B is high in symbolism. Which approach, a “left-brain” (literal) or a “right-brain” (symbolic) one, is more effective in B2B advertising? Are the message creation guidelines from the history of B2B advertising accurate? Are the foundations of B2B message creation (experience and unproven ideas) sound?
Abstract:
A basic element in advertising strategy is the choice of an appeal. Many researchers have studied communication message form, specifically forms of literalism and symbolism or some variation thereof. The motives for such study are grounded in increasing the effectiveness of commercial communication messages, especially advertising messages. Advertising research studies typically use forms of literalism (e.g. informativeness) or symbolism (e.g. metaphors, tropes, schemes, figures of speech, and rhetorical figures) as independent variables and compare these against one or more of the traditional advertising effectiveness measures as dependent variable(s). The main challenge in assessing the effectiveness of literalism or symbolism in message content is the discrete identification of the construct; however, no standard, empirically tested measure was located in the literature.
Abstract:
Normative influences on road user behaviour have been well documented and include such things as personal, group, subjective and moral norms. Commonly, normative factors are examined within one cultural context, although a few examples of exploring the issue across cultures exist. Such examples add to our understanding of differences in perceptions of the normative factors that may exert influence on road users and can assist in determining whether successful road safety interventions in one location may be successful in another. Notably, the literature is relatively silent on such influences in countries experiencing rapidly escalating rates of motorization. China is one such country where new drivers are taking to the roads in unprecedented numbers and authorities are grappling with the associated challenges. This paper presents results from qualitative and quantitative research on self-reported driving speeds of car drivers and related issues in Australia and China. Focus group interviews and questionnaires conducted in each country examined normative factors that might influence driving in each cultural context. Qualitative findings indicated perceptions of community acceptance of speeding were present in both countries but appeared more widespread in China, yet quantitative results did not support this difference. Similarly, with regard to negative social feedback from speeding, qualitative findings suggested no embarrassment associated with speeding among Chinese participants and mixed results among Australian participants, yet quantitative results indicated greater embarrassment for Chinese drivers. This issue was also examined from the perspective of self-identity and findings were generally similar across both samples and appear related to whether it is important to be perceived as a skilled/safe driver by others. An interesting and important finding emerged with regard to how Chinese drivers may respond to questions about road safety issues if the answers might influence foreigners’ perceptions of China. In attempting to assess community norms associated with speeding, participants were asked to describe what they would tell a foreign visitor about the prevalence of speeding in China. Responses indicated that if asked by a foreigner, people may answer in a manner that portrayed China as a safe country (e.g., that drivers do not speed), irrespective of the actual situation. This ‘faking good for foreigners’ phenomenon highlights the importance of considering ‘face’ when conducting research in China – a concept absent from the road safety literature. An additional noteworthy finding that has been briefly described in the road safety literature is the importance and strength of the normative influence of social networks (guanxi) in China. The use of personal networks to assist in avoiding penalties for traffic violations was described by Chinese participants and is an area that could be addressed to strengthen the deterrent effect of traffic law enforcement. Overall, the findings suggest important considerations for developing and implementing road safety countermeasures in different cultural contexts.
Abstract:
New substation automation applications, such as sampled value process buses and synchrophasors, require sampling accuracy of 1 µs or better. The Precision Time Protocol (PTP), IEEE Std 1588, achieves this level of performance and integrates well into Ethernet based substation networks. This paper takes a systematic approach to the performance evaluation of commercially available PTP devices (grandmaster, slave, transparent and boundary clocks) from a variety of manufacturers. The “error budget” is set by the performance requirements of each application. The “expenditure” of this error budget by each component is valuable information for a system designer. The component information is used to design a synchronization system that meets the overall functional requirements. The quantitative performance data presented shows that this testing is effective and informative. Results from testing PTP performance in the presence of sampled value process bus traffic demonstrate the benefit of a “bottom up” component testing approach combined with “top down” system verification tests. A test method that uses a precision Ethernet capture card, rather than dedicated PTP test sets, to determine the Correction Field Error of transparent clocks is presented. This test is particularly relevant for highly loaded Ethernet networks with stringent timing requirements. The methods presented can be used for development purposes by manufacturers, or by system integrators for acceptance testing. A sampled value process bus was used as the test application for the systematic approach described in this paper. The test approach was applied, components were selected, and the system performance was verified to meet the application's requirements. Systematic testing, as presented in this paper, is applicable to a range of industries that use, rather than develop, PTP for time transfer.
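The "error budget" bookkeeping described above can be made concrete: each device's worst-case time error is an expenditure against the application's allowance, 1 µs for a sampled value process bus. The figures in this sketch are hypothetical placeholders, not measurements from the paper:

```python
# A minimal sketch of error budget bookkeeping: sum each component's worst-case
# time error and compare against the application's 1 us allowance. Figures are
# hypothetical, not the paper's measurements.
BUDGET_NS = 1_000  # 1 microsecond, in nanoseconds

contributions_ns = {
    "grandmaster clock": 100,
    "transparent clocks (3 hops x 50 ns)": 150,
    "boundary clock": 200,
    "slave clock": 150,
}

spent = sum(contributions_ns.values())
for name, ns in contributions_ns.items():
    print(f"{name:38s} {ns:5d} ns")
verdict = "within" if spent <= BUDGET_NS else "EXCEEDS"
print(f"{'total':38s} {spent:5d} ns  ({verdict} the {BUDGET_NS} ns budget)")
```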
Abstract:
Safety at Railway Level Crossings (RLXs) is an important issue within the Australian transport system. Crashes at RLXs involving road vehicles in Australia are estimated to cost $10 million each year. Such crashes are mainly due to human factors; unintentional errors contribute to 46% of all fatal collisions and are far more common than deliberate violations. This suggests that innovative interventions targeting drivers are particularly promising for improving RLX safety. In recent years there has been rapid development of a variety of affordable technologies which can be used to increase drivers’ risk awareness around crossings. To date, no research has evaluated the potential effects of such technologies at RLXs in terms of safety, traffic and acceptance of the technology. Integrating driving and traffic simulations is a safe and affordable approach for evaluating these effects. The methodology will be implemented in a driving simulator, in which we recreated realistic driving scenarios with typical road environments and realistic traffic. This paper presents a methodology for comprehensively evaluating the potential benefits and negative effects of such interventions: it evaluates driver awareness at RLXs, and driver distraction and workload when using the technology. Subjective assessments of the perceived usefulness and ease of use of the technology are obtained from standard questionnaires. Driving simulation will provide a model of driving behaviour at RLXs which will be used to estimate the effects of the new technology on a road network featuring RLXs, for different market penetrations, using a traffic simulation. This methodology can assist in evaluating future safety interventions at RLXs.