A discrete-trial approach to the functional analysis of aggressive behaviour in two boys with autism
Abstract:
Intervention to reduce challenging behaviour may be enhanced when based on a prior functional analysis. The present study describes a discrete-trial approach for the functional analysis of aggressive behaviour in two boys with autism. Twenty brief assessment trials were conducted in the classroom by the teacher under each of three conditions (i.e., attention, task and tangible). The results showed a clear pattern to each child's aggressive behaviour and suggested logical intervention strategies, although the study is limited because it involved only two children. The discrete-trial approach would appear to represent a practical and ecologically valid technique for conducting a functional analysis of challenging behaviour in applied settings.
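To make the logic of the discrete-trial assessment concrete, the sketch below tallies hypothetical trial-by-trial outcomes under the three conditions named above (attention, task and tangible) and flags the condition with the highest rate of aggression as the likely maintaining function. The counts are illustrative only and are not the study's data.

```python
# Hypothetical sketch: summarising discrete-trial functional analysis data.
# Each list holds 1 (aggression occurred) or 0 (did not) for 20 classroom trials;
# the values are invented for illustration, not taken from the study.
trials = {
    "attention": [1, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1],
    "task":      [0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0],
    "tangible":  [0, 1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0],
}

for condition, outcomes in trials.items():
    pct = 100 * sum(outcomes) / len(outcomes)
    print(f"{condition:9s}: aggression on {pct:.0f}% of trials")

# A clearly elevated rate in one condition suggests the likely function
# (e.g. attention-maintained behaviour) and points to an intervention strategy.
likely_function = max(trials, key=lambda c: sum(trials[c]))
print("Suggested maintaining condition:", likely_function)
```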
Abstract:
Financial processes may possess long memory and their probability densities may display heavy tails. Many models have been developed to deal with this tail behaviour, which reflects the jumps in the sample paths. On the other hand, the presence of long memory, which contradicts the efficient market hypothesis, is still an issue for further debate. These difficulties pose challenges for memory detection and for modelling the co-presence of long memory and heavy tails. This PhD project aims to respond to these challenges. The first part aims to detect memory in a large number of financial time series on stock prices and exchange rates using their scaling properties. Since financial time series often exhibit stochastic trends, a common form of nonstationarity, strong trends in the data can lead to false detection of memory. We will take advantage of a technique known as multifractal detrended fluctuation analysis (MF-DFA) that can systematically eliminate trends of different orders. This method is based on the identification of the scaling of the q-th-order moments and is a generalisation of the standard detrended fluctuation analysis (DFA), which uses only the second moment, that is, q = 2. We also consider the rescaled range (R/S) analysis and the periodogram method to detect memory in financial time series and compare their results with those of MF-DFA. An interesting finding is that short memory is detected for stock prices of the American Stock Exchange (AMEX) and long memory is found to be present in the time series of two exchange rates, namely the French franc and the Deutsche Mark. Electricity price series of the five states of Australia are also found to possess long memory. For these electricity price series, heavy tails are also pronounced in their probability densities. The second part of the thesis develops models to represent short-memory and long-memory financial processes as detected in Part I. These models take the form of continuous-time AR(∞)-type equations whose kernel is the Laplace transform of a finite Borel measure. By imposing appropriate conditions on this measure, short memory or long memory in the dynamics of the solution will result. A specific form of the models, which has a good MA(∞)-type representation, is presented for the short-memory case. Parameter estimation for this type of model is performed via least squares, and the models are applied to the stock prices in the AMEX, which were established in Part I to possess short memory. By selecting the kernel in the continuous-time AR(∞)-type equations to have the form of the Riemann-Liouville fractional derivative, we obtain a fractional stochastic differential equation driven by Brownian motion. This type of equation is used to represent financial processes with long memory, whose dynamics are described by the fractional derivative in the equation. These models are estimated via quasi-likelihood, namely via a continuous-time version of the Gauss-Whittle method. The models are applied to the exchange rates and the electricity prices of Part I with the aim of confirming their possible long-range dependence established by MF-DFA. The third part of the thesis provides an application of the results established in Parts I and II to characterise and classify financial markets. We will pay attention to the New York Stock Exchange (NYSE), the American Stock Exchange (AMEX), the NASDAQ Stock Exchange (NASDAQ) and the Toronto Stock Exchange (TSX).
The parameters from MF-DFA and those of the short-memory AR(∞)-type models will be employed in this classification. We propose the Fisher discriminant algorithm to find a classifier in the two- and three-dimensional spaces of data sets and then provide cross-validation to verify discriminant accuracies. This classification is useful for understanding and predicting the behaviour of different processes within the same market. The fourth part of the thesis investigates the heavy-tailed behaviour of financial processes which may also possess long memory. We consider fractional stochastic differential equations driven by stable noise to model financial processes such as electricity prices. The long memory of electricity prices is represented by a fractional derivative, while the stable noise input models their non-Gaussianity via the tails of their probability density. A method using the empirical densities and MF-DFA is provided to estimate all the parameters of the model and simulate sample paths of the equation. The method is then applied to analyse daily spot prices for the five states of Australia. Comparisons with the results obtained from the R/S analysis, the periodogram method and MF-DFA are provided. The results from fractional SDEs agree with those from MF-DFA, which are based on multifractal scaling, while those from the periodograms, which are based on the second-order moment, seem to underestimate the long-memory dynamics of the process. This highlights the need for, and usefulness of, fractal methods in modelling non-Gaussian financial processes with long memory.
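For readers unfamiliar with MF-DFA, the following is a minimal sketch of the procedure described above: the series is integrated, divided into segments, locally detrended with a polynomial, and the q-th-order fluctuation functions are computed, with q = 2 recovering standard DFA. The segment sizes, q values and synthetic input series are assumptions for illustration, not the thesis code.

```python
import numpy as np

def mfdfa(x, scales, q_values, order=1):
    """Minimal MF-DFA sketch: estimates the generalised Hurst exponent h(q)
    as the slope of log F_q(s) versus log s; q = 2 reduces to standard DFA."""
    profile = np.cumsum(x - np.mean(x))          # integrated (profile) series
    results = {}
    for q in q_values:
        fq = []
        for s in scales:
            n_seg = len(profile) // s
            rms = []
            for v in range(n_seg):
                seg = profile[v * s:(v + 1) * s]
                t = np.arange(s)
                coeffs = np.polyfit(t, seg, order)        # local polynomial trend
                detrended = seg - np.polyval(coeffs, t)
                rms.append(np.sqrt(np.mean(detrended ** 2)))
            rms = np.asarray(rms)
            if q == 0:
                fq.append(np.exp(0.5 * np.mean(np.log(rms ** 2))))
            else:
                fq.append(np.mean(rms ** q) ** (1.0 / q))
        # slope of the log-log fit estimates the generalised Hurst exponent h(q)
        results[q] = np.polyfit(np.log(scales), np.log(fq), 1)[0]
    return results

# Illustrative use on a synthetic return series (uncorrelated noise, so h(2) ~ 0.5)
rng = np.random.default_rng(0)
returns = rng.standard_normal(5000)
print(mfdfa(returns, scales=[16, 32, 64, 128, 256], q_values=[-2, 2, 4]))
```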
Temperature variation and emergency hospital admissions for stroke in Brisbane, Australia, 1996-2005
Abstract:
Stroke is a leading cause of disability and death. This study evaluated the association between temperature variation and emergency admissions for stroke in Brisbane, Australia. Daily data on emergency admissions for stroke, meteorological conditions and air pollution were obtained for the period January 1996 to December 2005. The relative risk of emergency admissions for stroke was estimated with a generalized estimating equations (GEE) model. For primary intracerebral hemorrhage (PIH) emergency admissions, the average daily PIH count for the group aged < 65 increased by 15% (95% Confidence Interval (CI): 5, 26%) and 12% (95% CI: 2, 22%) for a 1°C increase in daily maximum temperature and minimum temperature in summer, respectively, after controlling for the potential confounding effects of humidity and air pollutants. For ischemic stroke (IS) emergency admissions, the average daily IS count for the group aged ≥ 65 decreased by 3% (95% CI: -6, 0%) for a 1°C increase in daily maximum temperature in winter, after adjustment for confounding factors. Temperature variation was significantly associated with emergency admissions for stroke, and its impact varied with the type of stroke. Health authorities should pay greater attention to a possible increase in demand for emergency stroke care when temperatures change, in both summer and winter.
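As a hedged illustration of the modelling approach named above (a GEE model for daily admission counts), the sketch below fits a Poisson-family GEE with statsmodels and converts the temperature coefficient into a relative risk per 1°C. The column names, clustering variable and synthetic data are assumptions, not the study's dataset or exact specification.

```python
# Hypothetical sketch of a GEE model for daily stroke admissions regressed on
# maximum temperature, adjusting for humidity and an air-pollution covariate.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 365
df = pd.DataFrame({
    "admissions": rng.poisson(3, n),
    "tmax": rng.normal(27, 3, n),          # daily maximum temperature (°C)
    "humidity": rng.normal(60, 10, n),
    "pm10": rng.normal(20, 5, n),
    "month": np.repeat(np.arange(1, 13),
                       [31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]),
})

model = smf.gee("admissions ~ tmax + humidity + pm10",
                groups="month",                          # clustering unit (illustrative)
                data=df,
                family=sm.families.Poisson(),
                cov_struct=sm.cov_struct.Exchangeable())
result = model.fit()

# exp(coefficient) gives the relative risk per 1°C increase in maximum temperature
print(np.exp(result.params["tmax"]))
```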
Abstract:
Communication is one team process factor that has received considerable research attention in the team literature. This literature provides equivocal evidence regarding the role of communication in team performance and yet does not indicate when communication becomes important for team performance. This research program sought to address this evidence gap by a) testing task complexity and team member diversity (race diversity, gender diversity and work value diversity) as moderators of the team communication-performance relationship; and b) testing a team communication-performance model using established teams across two different task types. The functional perspective was used as the theoretical framework for operationalizing team communication activity. The research program utilised a quasi-experimental research design with participants from a large multi-national information technology company whose Head Office was based in Sydney, Australia. Participants voluntarily completed two team building exercises (a decision-making task and a production task) and completed two online questionnaires. In total, data were collected from 1039 individuals who constituted 203 work teams. Analysis of the data revealed a small number of significant moderation effects, not all in the expected direction. However, an interesting and unexpected finding also emerged from Study One. Large and significant correlations between communication activity ratings were found across tasks, but not within tasks. This finding suggested that teams were displaying very similar profiles of communication on each task, despite the tasks having different communication requirements. Given this finding, Study Two sought to a) determine the relative importance of task versus team effects in explaining variance in team communication measures for established teams; b) determine if established teams had reliable and discernible team communication profiles and, if so, c) investigate whether team communication profiles related to task performance. Multi-level modeling and repeated measures analysis of variance (ANOVA) revealed that task type did not have an effect on team communication ratings. However, teams accounted for 24% of the total variance in communication measures. Through cluster analysis, five reliable and distinct team communication profiles were identified. Consistent with the findings of the multi-level analysis and repeated measures ANOVA, teams’ profiles were virtually identical across the decision-making and production tasks. A relationship between communication profile and performance was identified for the production task, although not for the decision-making task. This research responds to calls in the literature for a better understanding of when communication becomes important for team performance. The moderators tested in this research were not found to have a substantive or reliable effect on the relationship between communication and performance. However, the consistency in team communication activity suggests that established teams can be characterized by their communication profiles and, further, that these communication profiles may have implications for team performance. The findings of this research provide theoretical support for the functional perspective in terms of the communication-performance relationship and further support the team development literature as an explanation for the stability in team communication profiles.
This research can also assist organizations to better understand the specific types of communication activity and profiles of communication that could offer teams a performance advantage.
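The profile analysis described above can be illustrated with a small clustering sketch: team-level communication ratings are standardised and grouped into five clusters, mirroring the idea of team communication profiles. The feature names, synthetic ratings and choice of k = 5 are assumptions for illustration, not the study's variables or results.

```python
# Illustrative sketch: clustering teams by communication-activity ratings
# to obtain "communication profiles". Data and feature names are synthetic.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)
n_teams = 203
# e.g. mean ratings of planning, information sharing, evaluation and monitoring
ratings = rng.normal(3.5, 0.7, size=(n_teams, 4))

X = StandardScaler().fit_transform(ratings)
profiles = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(X)

for k in range(5):
    print(f"profile {k}: {np.sum(profiles == k)} teams")
```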
Abstract:
Suicide has drawn much attention from both the scientific community and the public. Examining the impact of socio-environmental factors on suicide is essential in developing suicide prevention strategies and interventions, because it will provide health authorities with important information for their decision-making. However, previous studies did not examine the impact of socio-environmental factors on suicide using a spatial analysis approach. The purpose of this study was to identify the patterns of suicide and to examine how socio-environmental factors impact on suicide over time and space at the Local Government Area (LGA) level in Queensland. The suicide data between 1999 and 2003 were collected from the Australian Bureau of Statistics (ABS). Socio-environmental variables at the LGA level included climate (rainfall, maximum and minimum temperature), Socio-Economic Indexes for Areas (SEIFA) and demographic variables (proportion of Indigenous population, unemployment rate, and proportions of population with low income and low education level). Climate data were obtained from the Australian Bureau of Meteorology. SEIFA and demographic variables were acquired from the ABS. A series of statistical and geographical information system (GIS) approaches were applied in the analysis. This study included two stages. The first stage used average annual data to view the spatial pattern of suicide and to examine the association between socio-environmental factors and suicide over space. The second stage examined the spatiotemporal pattern of suicide and assessed the socio-environmental determinants of suicide, using more detailed seasonal data. In this research, 2,445 suicide cases were included, with 1,957 males (80.0%) and 488 females (20.0%). In the first stage, we examined the spatial pattern and the determinants of suicide using 5-year aggregated data. Spearman correlations were used to assess associations between variables. A Poisson regression model was then applied in the multivariable analysis, as the occurrence of suicide is a low-probability event and this model fitted the data well. Suicide mortality varied across LGAs and was associated with a range of socio-environmental factors. The multivariable analysis showed that maximum temperature was significantly and positively associated with male suicide (relative risk [RR] = 1.03, 95% CI: 1.00 to 1.07). A higher proportion of Indigenous population was associated with more suicides in the male population (RR = 1.02, 95% CI: 1.01 to 1.03). There was a positive association between unemployment rate and suicide in both genders (male: RR = 1.04, 95% CI: 1.02 to 1.06; female: RR = 1.07, 95% CI: 1.00 to 1.16). No significant associations were observed for rainfall, minimum temperature, SEIFA, or the proportions of population with low individual income and low educational attainment. In the second stage of this study, we undertook a preliminary spatiotemporal analysis of suicide using seasonal data. Firstly, we assessed the interrelations between variables. Secondly, a generalised estimating equations (GEE) model was used to examine the socio-environmental impact on suicide over time and space, as this model is well suited to analysing repeated longitudinal data (e.g., seasonal suicide mortality in a certain LGA) and it fitted the data better than other models (e.g., a Poisson model). The suicide pattern varied with season and LGA.
The north of Queensland had the highest suicide mortality rate in all seasons, while no suicide cases occurred in the southwest. The northwest had consistently higher suicide mortality in spring, autumn and winter. In other areas, suicide mortality varied between seasons. This analysis showed that maximum temperature was positively associated with suicide in the male population (RR = 1.24, 95% CI: 1.04 to 1.47) and the total population (RR = 1.15, 95% CI: 1.00 to 1.32). A higher proportion of Indigenous population was associated with more suicides in the total population (RR = 1.16, 95% CI: 1.13 to 1.19) and in both genders (male: RR = 1.07, 95% CI: 1.01 to 1.13; female: RR = 1.23, 95% CI: 1.03 to 1.48). Unemployment rate was positively associated with total (RR = 1.40, 95% CI: 1.24 to 1.59) and female (RR = 1.09, 95% CI: 1.01 to 1.18) suicide. There was also a positive association between the proportion of population with low individual income and suicide in the total (RR = 1.28, 95% CI: 1.10 to 1.48) and male (RR = 1.45, 95% CI: 1.23 to 1.72) populations. Rainfall was positively associated with suicide only in the total population (RR = 1.11, 95% CI: 1.04 to 1.19). No significant associations were found for minimum temperature, SEIFA, or the proportion of population with low educational attainment. The second stage was an extension of the first. Different temporal resolutions of data were used in the two stages (i.e., mean yearly data in the first stage and seasonal data in the second), but the results are generally consistent with each other. Compared with other studies, this research explored how the impact of a wide range of socio-environmental factors on suicide varies across geographical units. Maximum temperature, proportion of Indigenous population, unemployment rate and proportion of population with low individual income were among the major determinants of suicide in Queensland. However, the influence of other factors (e.g., socio-cultural background, alcohol and drug use) on suicide cannot be ignored. An in-depth understanding of these factors is vital in planning and implementing suicide prevention strategies. Five recommendations for future research are derived from this study: (1) It is vital to acquire detailed personal information on each suicide case and relevant information on the population when assessing the key socio-environmental determinants of suicide; (2) Bayesian models could be applied to compare mortality rates and their socio-environmental determinants across LGAs in future research; (3) In LGAs with warm weather, a high proportion of Indigenous population and/or a high unemployment rate, concerted efforts need to be made to control and prevent suicide and other mental health problems; (4) The current surveillance, forecasting and early warning system needs to be strengthened to trace climate and socioeconomic change over time and space and its impact on population health; (5) It is necessary to evaluate and improve the facilities for mental health care, psychological consultation, and suicide prevention and control programs, especially in areas with low socio-economic status, high unemployment rates, extreme weather events and natural disasters.
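As a hedged sketch of the first-stage modelling described above, the code below fits a Poisson regression of LGA-level suicide counts on socio-environmental covariates with a population offset, and exponentiates the coefficients to obtain relative risks. The variable names and synthetic data are illustrative assumptions, not the study's dataset.

```python
# Hypothetical sketch: LGA-level suicide counts modelled with Poisson regression
# against socio-environmental covariates, using log population as an offset.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n_lga = 125
df = pd.DataFrame({
    "suicides": rng.poisson(15, n_lga),
    "tmax": rng.normal(30, 3, n_lga),              # mean annual maximum temperature
    "indigenous_pct": rng.uniform(0, 30, n_lga),
    "unemployment": rng.uniform(2, 15, n_lga),
    "population": rng.integers(2000, 200000, n_lga),
})

model = smf.glm("suicides ~ tmax + indigenous_pct + unemployment",
                data=df,
                family=sm.families.Poisson(),
                offset=np.log(df["population"]))
result = model.fit()

# exp(coefficient) is interpreted as the relative risk per one-unit increase
print(np.exp(result.params))
```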
Abstract:
Changes in load characteristics, deterioration with age, environmental influences and random actions may cause local or global damage in structures, especially in bridges, which are designed for long life spans. Continuous health monitoring of structures will enable the early identification of distress and allow appropriate retrofitting in order to avoid failure or collapse of the structures. In recent times, structural health monitoring (SHM) has attracted much attention in both research and development. Local and global methods of damage assessment using the monitored information are an integral part of SHM techniques. In the local case, the state of a structure is assessed either by direct visual inspection or by experimental techniques such as acoustic emission, ultrasonic, magnetic particle inspection, radiography and eddy current. A characteristic of all these techniques is that their application requires a prior localization of the damaged zones. The limitations of the local methodologies can be overcome by using vibration-based methods, which give a global damage assessment. The vibration-based damage detection methods use measured changes in dynamic characteristics to evaluate changes in physical properties that may indicate structural damage or degradation. The basic idea is that modal parameters (notably frequencies, mode shapes, and modal damping) are functions of the physical properties of the structure (mass, damping, and stiffness). Changes in the physical properties will therefore cause changes in the modal properties. Any reduction in structural stiffness and increase in damping in the structure may indicate structural damage. This research uses the variations in vibration parameters to develop a multi-criteria method for damage assessment. It incorporates the changes in natural frequencies, modal flexibility and modal strain energy to locate damage in the main load-bearing elements in bridge structures such as beams, slabs and trusses, and in simple bridges involving these elements. Dynamic computer simulation techniques are used to develop and apply the multi-criteria procedure under different damage scenarios. The effectiveness of the procedure is demonstrated through numerical examples. Results show that the proposed method, incorporating modal flexibility and modal strain energy changes, is effective for damage assessment in the structures treated herein.
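The modal-flexibility criterion mentioned above can be sketched as follows: the flexibility matrix is approximated from mass-normalised mode shapes and natural frequencies as F ≈ Σ_i φ_i φ_i^T / ω_i², and the change in flexibility between healthy and damaged states points to the damage location. The mode shapes and frequencies below are illustrative assumptions, not results from the thesis.

```python
import numpy as np

def modal_flexibility(frequencies_hz, mode_shapes):
    """Approximate flexibility F ~ sum_i (1/w_i^2) * phi_i * phi_i^T,
    with phi_i mass-normalised mode shapes and w_i = 2*pi*f_i."""
    omega = 2 * np.pi * np.asarray(frequencies_hz)
    F = np.zeros((mode_shapes.shape[0], mode_shapes.shape[0]))
    for w, phi in zip(omega, mode_shapes.T):
        F += np.outer(phi, phi) / w**2
    return F

# Illustrative data only: 3 measured modes at 5 points along a beam,
# for a "healthy" and a "damaged" state.
rng = np.random.default_rng(4)
phi_healthy = rng.normal(size=(5, 3))
phi_damaged = phi_healthy + 0.05 * rng.normal(size=(5, 3))
f_healthy = np.array([3.2, 12.5, 28.1])          # Hz
f_damaged = np.array([3.0, 12.1, 27.6])          # damage lowers stiffness and frequency

delta_F = (modal_flexibility(f_damaged, phi_damaged)
           - modal_flexibility(f_healthy, phi_healthy))

# The measurement point with the largest change in flexibility is flagged
# as the likely damage location.
print(np.argmax(np.max(np.abs(delta_F), axis=0)))
```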
Abstract:
As the ultimate corporate decision-makers, directors have an impact on the investment time horizons of the corporations they govern. How they make investment decisions has been profoundly influenced by the expansion of the investment chain and the increasing concentration of share ownership in institutional hands. By examining agency in light of legal theory, we highlight that the board is in fact sui generis and not an agent of shareholders. Consequently, transparency can lead to directors being 'captured' by institutional investor objectives and timeframes, potentially to the detriment of the corporation as a whole. The counter-intuitive conclusion is that transparency may, under certain conditions, undermine good corporate governance and lead to excessive short-termism.
Abstract:
This article considers the distinctive ways in which the Special Broadcasting Service (SBS) has evolved over its history since 1980, and how it has managed competing claims to being a multicultural yet broad-appeal broadcaster, and a comprehensive yet low-cost media service. It draws attention to the challenges presented by a global rethinking of the nature of citizenship and its relationship to media, for which SBS is well placed as a leader, and the challenges of online media for traditional public service media models, where SBS has arguably been a laggard, particularly when compared with the Australian Broadcasting Corporation (ABC). It notes recent work that has been undertaken by the author with others into user-created content strategies at SBS and how its online news and current affairs services have been evolving in recent years.
Abstract:
The inquiry documented in this thesis is located at the nexus of technological innovation and traditional schooling. As we enter the second decade of a new century, few would argue against the increasingly urgent need to integrate digital literacies with traditional academic knowledge. Yet, despite substantial investments from governments and businesses, the adoption and diffusion of contemporary digital tools in formal schooling remain sluggish. To date, research on technology adoption in schools tends to take a deficit perspective of schools and teachers, with the lack of resources and teacher ‘technophobia’ most commonly cited as barriers to digital uptake. Corresponding interventions that focus on increasing funding and upskilling teachers, however, have made little difference to adoption trends in the last decade. Empirical evidence that explicates the cultural and pedagogical complexities of innovation diffusion within long-established conventions of mainstream schooling, particularly from the standpoint of students, is wanting. To address this knowledge gap, this thesis inquires into how students evaluate and account for the constraints and affordances of contemporary digital tools when they engage with them as part of their conventional schooling. It documents the attempted integration of a student-led Web 2.0 learning initiative, known as the Student Media Centre (SMC), into the schooling practices of a long-established, high-performing independent senior boys’ school in urban Australia. The study employed an ‘explanatory’ two-phase research design (Creswell, 2003) that combined complementary quantitative and qualitative methods to achieve both breadth of measurement and richness of characterisation. In the initial quantitative phase, a self-reported questionnaire was administered to the senior school student population to determine adoption trends and predictors of SMC usage (N=481). Measurement constructs included individual learning dispositions (learning and performance goals, cognitive playfulness and personal innovativeness), as well as social and technological variables (peer support, perceived usefulness and ease of use). Incremental predictive models of SMC usage were developed using Classification and Regression Tree (CART) modelling: (i) individual-level predictors, (ii) individual and social predictors, and (iii) individual, social and technological predictors. Peer support emerged as the best predictor of SMC usage. Other salient predictors included perceived ease of use and usefulness, cognitive playfulness and learning goals. On the whole, an overwhelming proportion of students reported low usage levels, low perceived usefulness and a lack of peer support for engaging with the digital learning initiative. The small minority of frequent users reported having high levels of peer support and robust learning goal orientations, rather than being predominantly driven by performance goals. These findings indicate that tensions around social validation, digital learning and academic performance pressures influence students’ engagement with the Web 2.0 learning initiative. The qualitative phase that followed provided insights into these tensions by shifting the analytic focus from individual attitudes and behaviours to the shared social and cultural reasoning practices that explain students’ engagement with the innovation. Six in-depth focus groups, comprising 60 students with different levels of SMC usage, were conducted, audio-recorded and transcribed.
Textual data were analysed using Membership Categorisation Analysis. Students’ accounts converged around a key proposition: the Web 2.0 learning initiative was useful-in-principle but useless-in-practice. While students endorsed the usefulness of the SMC for enhancing multimodal engagement, extending peer-to-peer networks and acquiring real-world skills, they also called attention to a number of constraints that hindered the realisation of these design affordances in practice. These constraints were cast in terms of three binary formulations of social and cultural imperatives at play within the school: (i) ‘cool/uncool’, (ii) ‘dominant staff/compliant student’, and (iii) ‘digital learning/academic performance’. The first formulation foregrounds the social stigma of the SMC among peers and its resultant lack of positive network benefits. The second relates to students’ perception of the school culture as authoritarian and punitive, with adverse effects on the very student agency required to drive the innovation. The third points to academic performance pressures in a crowded curriculum with tight timelines. Taken together, findings from both phases of the study provide the following key insights. First, students endorsed the learning affordances of contemporary digital tools such as the SMC for enhancing their current schooling practices. For the majority of students, however, these learning affordances were overshadowed by the performative demands of schooling, both social and academic. The student participants saw engagement with the SMC in school as distinct from, even oppositional to, the conventional social and academic performance indicators of schooling, namely (i) being ‘cool’ (or at least ‘not uncool’), (ii) being sufficiently ‘compliant’, and (iii) achieving good academic grades. Their reasoned response, therefore, was simply to resist engagement with the digital learning innovation. Second, a small minority of students seemed dispositionally inclined to negotiate the learning affordances and performance constraints of digital learning and traditional schooling more effectively than others. These students were able to engage more frequently and meaningfully with the SMC in school. Their ability to adapt and traverse seemingly incommensurate social and institutional identities and norms is theorised as cultural agility – a dispositional construct that comprises personal innovativeness, cognitive playfulness and learning goals orientation. The logic, then, is ‘both/and’ rather than ‘either/or’ for these individuals, who have a capacity to accommodate both learning and performance in school, whether in terms of digital engagement and academic excellence, or successful brokerage across multiple social identities and institutional affiliations within the school. In sum, this study takes us beyond the familiar terrain of deficit discourses that tend to blame institutional conservatism, lack of resourcing and teacher resistance for low uptake of digital technologies in schools. It does so by providing an empirical base for the development of a ‘third way’ of theorising technological and pedagogical innovation in schools, one which is more informed by students as critical stakeholders and thus more relevant to the lived culture within the school and its complex relationship to students’ lives outside of school. It is in this relationship that we find an explanation for how these individuals can, at one and the same time, be digital kids and analogue students.
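To illustrate the incremental CART modelling reported in the quantitative phase, the sketch below fits decision trees on (i) individual, (ii) individual plus social, and (iii) individual, social and technological predictor blocks. The variable names mirror the constructs in the abstract, but the synthetic data, tree settings and outcome definition are assumptions rather than the thesis analysis.

```python
# Illustrative sketch of incremental CART models predicting SMC usage.
import numpy as np
import pandas as pd
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(5)
n = 481
df = pd.DataFrame({
    "learning_goals": rng.normal(0, 1, n),
    "cognitive_playfulness": rng.normal(0, 1, n),
    "personal_innovativeness": rng.normal(0, 1, n),
    "peer_support": rng.normal(0, 1, n),
    "perceived_usefulness": rng.normal(0, 1, n),
    "ease_of_use": rng.normal(0, 1, n),
})
# Synthetic outcome in which peer support matters most (an assumption for the demo)
usage = (df["peer_support"] + 0.5 * df["ease_of_use"] + rng.normal(0, 1, n)) > 0.8

blocks = [
    ["learning_goals", "cognitive_playfulness", "personal_innovativeness"],   # (i) individual
    ["learning_goals", "cognitive_playfulness", "personal_innovativeness",
     "peer_support"],                                                         # (ii) + social
    ["learning_goals", "cognitive_playfulness", "personal_innovativeness",
     "peer_support", "perceived_usefulness", "ease_of_use"],                  # (iii) + technological
]
for i, cols in enumerate(blocks, 1):
    tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(df[cols], usage)
    print(f"model {i}: training accuracy {tree.score(df[cols], usage):.2f}")
```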
Abstract:
Arabic satellite television has recently attracted tremendous attention in both the academic and professional worlds, with a special interest in Aljazeera as a curious phenomenon in the Arab region. Having made a household name for itself worldwide with the airing of the Bin Laden tapes, Aljazeera has set out to deliberately change the culture of Arabic journalism, as has been repeatedly stated by its current General Manager, Waddah Khanfar, and to shake up Arab society by raising awareness of issues never discussed on television before and by challenging long-established social and cultural values and norms, while promoting, as it claims, Arab issues from a presumably Arab perspective. Working within the meta-frame of democracy, this Qatar-based network has been received with mixed reactions ranging from complete support to utter rejection in both the West and the Arab world. This research examines the social semiotics of Arabic television and the socio-cultural impact of translation-mediated news in Arabic satellite television, with the aim of carrying out a qualitative content analysis, informed by framing theory, critical linguistic analysis, social semiotics and translation theory, within a re-mediation framework which rests on the assumption that a medium “appropriates the techniques, forms and social significance of other media and attempts to rival or refashion them in the name of the real” (Bolter and Grusin, 2000: 66). This is multilayered research into how translation operates at two different yet interwoven levels: translation proper, that is, the rendition of discourse from one language into another at the text level, and translation as a broader process of interpretation of social behaviour that is driven by the linguistic and cultural forms of another medium, resulting in new social signs generated from source meaning reproduced as target meaning that is bound to be different in many respects. The research primarily focuses on the news media, news making and reporting at Arabic satellite television, and looks at translation as a reframing process of news stories in terms of content and cultural values. This notion is based on the premise that, by its very nature, news reporting is a framing process which involves a reconstruction of reality into actualities in presenting the news and providing the context for it. In other words, the mediation of perceived reality through a media form, such as television, actually modifies the mind’s ordering and internal representation of the reality that is presented. The research examines the process of reframing through translation of news already framed or actualized in another language and argues that, in submitting framed news reports to the translation process, several alterations take place, driven by linguistic and cultural constraints and shaped by the context in which the content is presented. These alterations, which involve recontextualizations, may be intentional or unintentional, motivated or unmotivated. Generally, they are the product of a lack of awareness of the dynamics and intricacies of turning a message from one language form into another. More specifically, they are the result of a synthesis process that consciously or subconsciously conforms to editorial policy and cultural interpretive frameworks. In either case, the original message is reproduced and the news is reframed.
For the case study, this research examines news broadcasts by the now world-renowned Arabic satellite television station Aljazeera and, to a lesser extent, the Lebanese Broadcasting Corporation (LBC) and Al-Arabiya where access is feasible, for comparison and cross-checking purposes. As a new phenomenon in the Arab world, Arabic satellite television, especially 24-hour news and current affairs, provides an interesting area worthy of study, not only for its immediate socio-cultural, professional and ethical implications for the Arabic media in particular, but also for news and current affairs production in the western media that rely on foreign language sources and translation mediation for international stories.
Abstract:
Business Process Management (BPM) has emerged as a popular management approach in both Information Technology (IT) and management practice. While there has been much research on business process modelling and the BPM life cycle, there has been little attention given to managing the quality of a business process during its life cycle. This study addresses this gap by providing a framework for organisations to manage the quality of business processes during different phases of the BPM life cycle. This study employs a multi-method research design which is based on the design science approach and the action research methodology. During the design science phase, the artifacts to model a quality-aware business process were developed. These artifacts were then evaluated through three cycles of action research which were conducted within three large Australian-based organisations. This study contributes to the body of BPM knowledge in a number of ways. Firstly, it presents a quality-aware BPM life cycle that provides a framework on how quality can be incorporated into a business process and subsequently managed during the BPM life cycle. Secondly, it provides a framework to capture and model quality requirements of a business process as a set of measurable elements that can be incorporated into the business process model. Finally, it proposes a novel root cause analysis technique for determining the causes of quality issues within business processes.
Abstract:
In the emerging literature related to destination branding, little has been reported about performance metrics. The focus of most research reported to date has been concerned with the development of destination brand identities and the implementation of campaigns (see, for example, Crockett & Wood 1999, Hall 1999, May 2001, Morgan et al. 2002). One area requiring increased attention is that of tracking the performance of destination brands over time. This is an important gap in the tourism literature, given: i) the increasing level of investment by destination marketing organisations (DMOs) in branding since the 1990s, ii) the complex political nature of DMO brand decision-making and increasing accountability to stakeholders (see Pike, 2005), and iii) the long-term nature of repositioning a destination’s image in the market place (see Gartner & Hunt, 1987). Indeed, a number of researchers in various parts of the world have pointed to a lack of market research monitoring destination marketing objectives, such as in Australia (see Prosser et al. 2000, Carson, Beattie and Gove 2003), North America (Sheehan & Ritchie 1997, Masberg 1999), and Europe (Dolnicar & Schoesser 2003)...
Abstract:
We report numerical analysis and experimental observation of strongly localized plasmons guided by triangular metal wedges and pay special attention to the effect of smooth (nonzero radius) tips. Dispersion, dissipation, and field structure of such wedge plasmons are analyzed using the compact two-dimensional finite-difference time-domain algorithm. Experimental observation is conducted by end-fire excitation and near-field scanning optical microscope detection of the predicted plasmons on 40° silver nanowedges with wedge tip radii of 20, 85, and 125 nm that were fabricated by the focused-ion-beam method. The effect of smoothing wedge tips is shown to be similar to that of increasing wedge angle. Increasing the wedge angle or wedge tip radius results in increasing propagation distance at the same time as decreasing field localization (decreasing wave number). Quantitative differences between the theoretical and experimental propagation distances are suggested to be due to a contribution of scattered bulk and surface waves near the excitation region, as well as additional losses due to surface roughness. The theoretical and measured propagation distances are several plasmon wavelengths and are useful for a range of nano-optical applications.
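A full FDTD example is beyond the scope of an abstract, but the link between dissipation and propagation distance mentioned above can be sketched by converting an assumed complex propagation constant into a 1/e propagation length and expressing it in plasmon wavelengths. The wavelength and effective index below are illustrative, not values from the paper.

```python
# Hedged illustration (not the paper's FDTD code): from a complex propagation
# constant beta (e.g. extracted from a simulation) to the intensity decay length.
import numpy as np

wavelength_vac = 632.8e-9                  # vacuum excitation wavelength (m), assumed
n_eff = 1.45 + 0.012j                      # illustrative complex effective index
beta = 2 * np.pi * n_eff / wavelength_vac  # complex propagation constant

plasmon_wavelength = 2 * np.pi / beta.real
propagation_distance = 1 / (2 * beta.imag)     # 1/e decay of intensity

print(f"plasmon wavelength: {plasmon_wavelength * 1e9:.0f} nm")
print(f"propagation distance: {propagation_distance * 1e6:.2f} um "
      f"(= {propagation_distance / plasmon_wavelength:.1f} plasmon wavelengths)")
```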
Abstract:
Here, we demonstrate that efficient nano-optical couplers can be developed using closely spaced gap plasmon waveguides in the form of two parallel nano-sized rectangular slots in a thin metal film or membrane. Using rigorous finite-difference and finite-element numerical algorithms, we investigate the physical mechanisms of coupling between two neighboring gap plasmon waveguides and determine typical coupling lengths for different structural parameters of the coupler. Special attention is focused on the effect of the major coupler parameters, such as the thickness of the metal film/membrane, the slot width, and the separation between the plasmonic waveguides. A detailed physical interpretation of the unusual dependencies of the coupling length on slot width and film thickness is presented, based on energy considerations. The obtained results will be important for the optimization and experimental development of plasmonic sub-wavelength compact directional couplers and other nano-optical devices for integrated nanophotonics.
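As a hedged sketch of how a coupling length can be estimated for a directional coupler of this kind, the code below uses the standard supermode relation L_c = π / |β_s - β_a|, with illustrative effective indices standing in for values one would obtain from the numerical solvers described above.

```python
# Hedged sketch: coupling length of two coupled waveguides from the propagation
# constants of the symmetric and antisymmetric supermodes. Numbers are assumed.
import numpy as np

wavelength = 1.55e-6                      # vacuum wavelength (m), assumed
n_eff_symmetric = 1.42                    # illustrative supermode effective indices
n_eff_antisymmetric = 1.38

k0 = 2 * np.pi / wavelength
beta_s = k0 * n_eff_symmetric
beta_a = k0 * n_eff_antisymmetric

coupling_length = np.pi / abs(beta_s - beta_a)    # length for full power transfer
print(f"coupling length: {coupling_length * 1e6:.2f} um")
```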