421 results for Horse shows.


Relevance:

10.00%

Publisher:

Abstract:

We report on the use of the hydrogen bond accepting properties of neutral nitrone moieties to prepare benzylic-amide-macrocycle-containing [2]rotaxanes in yields as high as 70%. X-ray crystallography shows the presence of up to four intercomponent hydrogen bonds between the amide groups of the macrocycle and the two nitrone groups of the thread. Dynamic 1H NMR studies of the rates of macrocycle pirouetting in nonpolar solutions indicate that amide-nitrone hydrogen bonds are particularly strong, ~1.3 and ~0.2 kcal mol-1 stronger than similar amide-ester and amide-amide interactions, respectively. In addition to polarizing the N-O bond through hydrogen bonding, the rotaxane structure affects the chemistry of the nitrone groups in two significant ways: the intercomponent hydrogen bonding activates the nitrone groups towards electrochemical reduction, a one-electron reduction of the rotaxane being stabilized by a remarkable 400 mV (8.1 kcal mol-1) with respect to the same process in the thread; encapsulation, however, protects the same functional groups from chemical reduction with an external reagent (and slows down electron transfer to and from the electroactive groups in cyclic voltammetry experiments). Mechanical interlocking with a hydrogen-bonding molecular sheath thus provides a route to an encapsulated polarized functional group and to radical anions of significant kinetic and thermodynamic stability.

Relevance:

10.00%

Publisher:

Abstract:

Background: The residue-wise contact order (RWCO) describes the sequence separations between a residue of interest and its contacting residues in a protein sequence. It is a new kind of one-dimensional protein structure property that represents the extent of long-range contacts and can be considered a generalization of contact order. Together with secondary structure, accessible surface area, the B factor, and contact number, RWCO provides comprehensive and important information for reconstructing the protein three-dimensional structure from a set of one-dimensional structural properties. Accurately predicting RWCO values could have many important applications in protein three-dimensional structure prediction and protein folding rate prediction, and could give deep insights into protein sequence-structure relationships. Results: We developed a novel approach to predict residue-wise contact order values in proteins based on support vector regression (SVR), starting from primary amino acid sequences. We explored seven different sequence encoding schemes to examine their effects on the prediction performance: local sequence in the form of PSI-BLAST profiles, local sequence plus amino acid composition, local sequence plus molecular weight, local sequence plus secondary structure predicted by PSIPRED, local sequence plus molecular weight and amino acid composition, local sequence plus molecular weight and predicted secondary structure, and local sequence plus molecular weight, amino acid composition and predicted secondary structure. When using local sequences with multiple sequence alignments in the form of PSI-BLAST profiles, we could predict the RWCO distribution with a Pearson correlation coefficient (CC) between the predicted and observed RWCO values of 0.55 and a root mean square error (RMSE) of 0.82, based on a well-defined dataset of 680 protein sequences. Moreover, by incorporating global features such as molecular weight and amino acid composition we could further improve the prediction performance, raising the CC to 0.57 and lowering the RMSE to 0.79. In addition, incorporating the secondary structure predicted by PSIPRED was found to significantly improve the prediction performance and yielded the best prediction accuracy, with a CC of 0.60 and an RMSE of 0.78, which is at least comparable to the performance of other existing methods. Conclusion: The SVR method shows a prediction performance competitive with, or at least comparable to, the previously developed linear regression-based methods for predicting RWCO values. In contrast to support vector classification (SVC), SVR is very good at estimating the raw value profiles of the samples. The successful application of the SVR approach in this study reinforces the view that support vector regression is a powerful tool for extracting the protein sequence-structure relationship and for estimating protein structural profiles from amino acid sequences.
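As an illustration of the kind of SVR pipeline described in this abstract, the sketch below trains an epsilon-SVR on residue-level feature vectors and reports the two evaluation metrics quoted above (Pearson CC and RMSE). The window size, kernel, regularisation settings and feature layout are illustrative assumptions rather than the authors' actual configuration, and random placeholders stand in for real PSI-BLAST profiles.

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.svm import SVR

# Hypothetical residue-level dataset: each residue is encoded by a sliding
# window of PSI-BLAST profile scores (here, a 15-residue window with 20
# columns per position) plus global features such as molecular weight and
# amino acid composition. Random placeholders stand in for real data.
rng = np.random.default_rng(42)
n_residues, window, profile_cols = 500, 15, 20
X_local = rng.standard_normal((n_residues, window * profile_cols))
X_global = rng.standard_normal((n_residues, 21))   # e.g. MW + 20 composition fractions
X = np.hstack([X_local, X_global])
y = rng.standard_normal(n_residues)                # placeholder RWCO targets

# Train an epsilon-SVR with an RBF kernel and evaluate with the two
# metrics used in the abstract: Pearson CC and RMSE.
split = 400
model = SVR(kernel="rbf", C=1.0, epsilon=0.1)
model.fit(X[:split], y[:split])
pred = model.predict(X[split:])

cc, _ = pearsonr(pred, y[split:])
rmse = float(np.sqrt(np.mean((pred - y[split:]) ** 2)))
print(f"CC = {cc:.2f}, RMSE = {rmse:.2f}")
```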

Relevance:

10.00%

Publisher:

Abstract:

As the popularity of video as an information medium rises, the amount of video content that we produce and archive keeps growing. This creates a demand for shorter representations of videos in order to assist the task of video retrieval. The traditional solution is to let humans watch these videos and write textual summaries based on what they saw. This summarisation process, however, is time-consuming. Moreover, a lot of useful audio-visual information contained in the original video can be lost. Video summarisation aims to turn a full-length video into a more concise version that preserves as much information as possible. The central problem of video summarisation is to balance the trade-off between how concise and how representative a summary is. There are also usability concerns that need to be addressed in a video summarisation scheme. To address these problems, this research aims to create an automatic video summarisation framework that combines and improves on existing video summarisation techniques, with a focus on practicality and user satisfaction. We also investigate the need for different summarisation strategies in different kinds of videos, for example news, sports, or TV series. Finally, we develop a video summarisation system based on the framework, which is validated by subjective and objective evaluation. The evaluation results show that the proposed framework is effective for creating video skims, producing a high user satisfaction rate while having reasonably low computing requirements. We also demonstrate that the techniques presented in this research can be used for visualising video summaries in the form of web pages showing various useful information, both from the video itself and from external sources.

Relevance:

10.00%

Publisher:

Abstract:

In an era of complex challenges that draw sustained media attention and entangle multiple organisational actors, this thesis addresses the gap between current trends in society and business, and existing scholarship in public relations and crisis communication. By responding to calls from crisis communication researchers to develop theory (Coombs, 2006a), to examine the interdependencies of crises (Seeger, Sellnow, & Ulmer, 1998), and to consider variation in crisis response (Seeger, 2002), this thesis contributes to theory development in crisis communication and public relations. By focusing on transformative change, this thesis extends existing scholarship built on a preservation or conservation logic where public relations is used to maintain stability by incrementally responding to changes in an organisation's environment (Cutlip, Center, & Broom, 2006; Everett, 2001; Grunig, 2000; Spicer, 1997). Based on the opportunity to contribute to ongoing theoretical development in the literature, the overall research problem guiding this thesis asks: How does transformative change during crisis influence corporate actors' communication? This thesis adopts punctuated equilibrium theory, which describes change as alternating between long periods of stability and short periods of revolutionary or transformative change (Gersick, 1991; Romanelli & Tushman, 1994; Siggelkow, 2002; Tushman, Newman, & Romanelli, 1986; Tushman & Romanelli, 1985). As a theory for change, punctuated equilibrium provides an opportunity to examine public relations and transformative change, building on scholarship that is based primarily on incremental change. Further, existing scholarship in public relations and crisis communication focuses on the actions of single organisations in situational or short-term crisis events. Punctuated equilibrium theory enables the study of multiple crises and multiple organisational responses during transformative change. In doing so, punctuated equilibrium theory provides a framework to explain both the context for transformative change and the actions or strategies enacted by organisations during transformative change (Tushman, Newman, & Romanelli, 1986; Tushman & Romanelli, 1985; Tushman, Virany, & Romanelli, 1986). The connections between context and action inform the research questions that guide this thesis: RQ1: What symbolic and substantive strategies persist and change as crises develop from situational events to transformative and multiple linked events? RQ2: What features of the crisis context influence changes in symbolic and substantive strategies? To shed light on these research questions, the thesis adopts a qualitative approach guided by process theory and methods to explicate the events, sequences and activities that were essential to change (Pettigrew, 1992; Van de Ven, 1992). Specifically, the thesis draws on an alternative template strategy (Langley, 1999) that provides several alternative interpretations of the same events (Allison, 1971; Allison & Zelikow, 1999). Following Allison (1971) and Allison and Zelikow (1999), this thesis uses three alternative templates of crisis or strategic response typologies to construct three narratives using media articles and organisational documents. The narratives are compared to identify and draw out different patterns of crisis communication strategies that operate within different crisis contexts. The thesis is based on the crisis events that affected three organisations within the pharmaceutical industry over a four-year period.
The primary organisation is Merck, as its product recall crisis triggered transformative change affecting, in different ways, the secondary organisations of Pfizer and Novartis. Three narratives are presented based on the crisis or strategic response typologies of Coombs (2006b), Allen and Caillouet (1994), and Oliver (1991). The findings of this thesis reveal different stories about crisis communication under transformative change. By zooming in to a micro perspective (Nicolini, 2009) to focus on the crisis communication and actions of a single organisation and zooming out to a macro perspective (Nicolini, 2009) to consider multiple organisations, new insights about crisis communication, change and the relationships among multiple organisations are revealed at the context and action levels. At the context level, each subsequent narrative demonstrates greater connections among multiple corporate actors. By zooming out from Coombs' (2006b) focus on single organisations to consider Allen and Caillouet's (1994) integration of the web of corporate actors, the thesis demonstrates how corporate actors add accountability pressures to the primary organisation. Next, by zooming further out to the macro perspective by considering Oliver's (1991) strategic responses to institutional processes, the thesis reveals a greater range of corporate actors that are caught up in the process of transformative change and accounts for their varying levels of agency over their environment. By zooming in to a micro perspective and out to a macro perspective (Nicolini, 2009) across alternative templates, the thesis sheds light on the sequences, events, and actions of primary and secondary organisations. Although the primary organisation remains the focus of sustained media attention across the four-year time frame, the secondary organisations, even when one faced a similar starting situation to the primary organisation, were buffered by the process of transformative change. This understanding of crisis contexts in transforming environments builds on existing knowledge in crisis communication. At the action level, the thesis also reveals different interpretations from each alternative template. Coombs' (2006b) narrative shows persistence in the primary organisation's crisis or strategic responses over the four-year time frame of the thesis. That is, the primary organisation consistently applies a diminish crisis response. At times, the primary organisation draws on denial responses when corporate actors question its legitimacy or actions. To close the crisis, the primary organisation uses a rebuild crisis posture (Coombs, 2006). These findings are replicated in Allen and Caillouet's (1994) narrative, noting this template's limitation to communication messages only. Oliver's (1991) narrative is consistent with Coombs' (2006b) but also demonstrates a shift from a strategic response that signals conformity to the environment to one that signals more active resistance to the environment over time. Specifically, the primary organisation's initial response demonstrates conformity, but these same messages were used some three years later to set new expectations in the environment in order to shape criteria and build acceptance for future organisational decisions. In summary, the findings demonstrate the power of crisis or strategic responses when considered over time and in the context of transformative change. The conclusions of this research contribute to scholarship in the public relations and management literatures.
Based on the significance of organisational theory, the primary theoretical contribution of the thesis relates to the role of interorganisational linkages or legitimacy buffers that form during the punctuation of equilibrium. The network of linkages among the corporate actors is also significant to the crisis communication literature, as these linkages form part of the process model of crisis communication under punctuated equilibrium. This model extends existing research that focuses on the crisis communication of single organisations to consider the emergent context that incorporates secondary organisations, as well as the localised contests of legitimacy and buffers from regulatory authorities. The thesis also provides an empirical base for punctuated equilibrium in public relations and crisis communication, extending Murphy's (2000) introduction of the theory to the public relations literature. In doing this, punctuated equilibrium theory reinvigorates theoretical development in crisis communication by extending existing scholarship around incrementalist approaches and demonstrating how public relations works in the context of transformative change. Further research in this area could use alternative templates to study transformative change caused by a range of crisis types, from natural disasters to product tampering, and could add further insight into the dynamics between primary and secondary organisations. This thesis contributes to practice by providing guidelines for crisis response strategy selection, and indicators related to the emergent context for crises under transformative change, that will help primary and secondary organisations respond to crises.

Relevance:

10.00%

Publisher:

Abstract:

Recently, user tagging systems have grown in popularity on the web. The tagging process is quite simple for ordinary users, which contributes to its popularity. However, the free vocabulary used in tags lacks standardization and suffers from semantic ambiguity. It is possible to capture the semantics of user tagging and represent them in the form of an ontology, but the application of the learned ontology to recommendation making has not been widely explored. In this paper we discuss our approach to learning a domain ontology from user tagging information and apply the extracted tag ontology in a pilot tag recommendation experiment. The initial results show that by using the tag ontology to re-rank the recommended tags, the accuracy of the tag recommendation can be improved.
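A minimal sketch of the re-ranking idea described above, assuming a toy ontology represented as a mapping from each tag to a set of related tags; the blending weight and the relatedness measure are hypothetical choices for illustration and are not taken from the paper.

```python
def rerank_tags(candidate_tags, ontology, query_tags, alpha=0.5):
    """Blend each candidate tag's original recommendation score with an
    ontology-based relatedness score and sort by the combined score.

    candidate_tags: list of (tag, score) pairs from a base recommender.
    ontology: mapping tag -> set of related tags (e.g. learned from
              co-occurrence in user tagging data).
    query_tags: tags already attached to the resource being tagged.
    """
    def relatedness(tag):
        if not query_tags:
            return 0.0
        related = ontology.get(tag, set())
        return len(related & set(query_tags)) / len(query_tags)

    reranked = [(tag, alpha * score + (1 - alpha) * relatedness(tag))
                for tag, score in candidate_tags]
    return sorted(reranked, key=lambda pair: pair[1], reverse=True)

# Toy example: the ontology pulls "python" ahead of "snake" for a
# programming-related resource.
ontology = {"python": {"programming", "scripting"},
            "snake": {"animal", "reptile"}}
candidates = [("snake", 0.6), ("python", 0.5)]
print(rerank_tags(candidates, ontology, query_tags=["programming"]))
```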

Relevance:

10.00%

Publisher:

Abstract:

Recently, user tagging systems have grown in popularity on the web. The tagging process is quite simple for ordinary users, which contributes to its popularity. However, the free vocabulary used in tags lacks standardization and suffers from semantic ambiguity. It is possible to capture the semantics of user tagging in some form of ontology, but the application of the resulting ontology to recommendation making has not been widely explored. In this paper we discuss our approach to learning a domain ontology from user tagging information and apply the extracted tag ontology in a pilot tag recommendation experiment. The initial results show that by using the tag ontology to re-rank the recommended tags, the accuracy of the tag recommendation can be improved.

Relevance:

10.00%

Publisher:

Abstract:

Multivariate volatility forecasts are an important input in many financial applications, in particular portfolio optimisation problems. Given the number of models available and the range of loss functions used to discriminate between them, selecting the optimal forecasting model is challenging. The aim of this thesis is to thoroughly investigate how effective several commonly used statistical (MSE and QLIKE) and economic (portfolio variance and portfolio utility) loss functions are at discriminating between competing multivariate volatility forecasts. An analytical investigation of the loss functions is performed to determine whether they identify the correct forecast as the best forecast. This is followed by an extensive simulation study that examines the ability of the loss functions to consistently rank forecasts, and their statistical power within tests of predictive ability. For the tests of predictive ability, the model confidence set (MCS) approach of Hansen, Lunde and Nason (2003, 2011) is employed. An empirical study then investigates whether the simulation findings hold in a realistic setting. In light of these earlier studies, a major empirical study seeks to identify the set of superior multivariate volatility forecasting models from 43 models that use either daily squared returns or realised volatility to generate forecasts. This study also assesses how the choice of volatility proxy affects the ability of the statistical loss functions to discriminate between forecasts. Analysis of the loss functions shows that QLIKE, MSE and portfolio variance can discriminate between multivariate volatility forecasts, while portfolio utility cannot. An examination of the effective loss functions shows that they can all identify the correct forecast at a point in time; however, their ability to discriminate between competing forecasts varies. QLIKE is identified as the most effective loss function, followed by portfolio variance and then MSE. The major empirical analysis reports that the optimal set of multivariate volatility forecasting models includes forecasts generated from both daily squared returns and realised volatility. Furthermore, it finds that the volatility proxy affects the statistical loss functions' ability to discriminate between forecasts in tests of predictive ability. These findings deepen our understanding of how to choose between competing multivariate volatility forecasts.
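For concreteness, the following sketch evaluates two competing covariance forecasts with two of the statistical loss functions named above. The abstract does not give the exact functional forms used in the thesis, so the standard matrix MSE (squared Frobenius distance) and multivariate QLIKE (log-determinant plus trace) definitions are assumed here, and the forecasts and proxy are toy placeholders.

```python
import numpy as np

def mse_loss(H, S):
    """Matrix MSE: squared Frobenius distance between forecast H and proxy S."""
    return float(np.sum((S - H) ** 2))

def qlike_loss(H, S):
    """Multivariate QLIKE: log|H| + tr(H^{-1} S); minimised in expectation
    when H equals the true conditional covariance."""
    _sign, logdet = np.linalg.slogdet(H)
    return float(logdet + np.trace(np.linalg.solve(H, S)))

# Toy comparison: a well-specified forecast (A) versus one that ignores
# correlations (B), scored against a noisy realised-covariance proxy.
rng = np.random.default_rng(0)
true_cov = np.array([[1.0, 0.3, 0.1],
                     [0.3, 1.5, 0.2],
                     [0.1, 0.2, 0.8]])
noise = 0.05 * rng.standard_normal((3, 3))
proxy = true_cov + (noise + noise.T) / 2          # keep the proxy symmetric
forecast_a = true_cov
forecast_b = np.diag(np.diag(true_cov))

for name, H in [("A", forecast_a), ("B", forecast_b)]:
    print(name, "MSE:", round(mse_loss(H, proxy), 4),
          "QLIKE:", round(qlike_loss(H, proxy), 4))
```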

Relevance:

10.00%

Publisher:

Abstract:

Data preprocessing is widely recognized as an important stage in anomaly detection. This paper reviews the data preprocessing techniques used by anomaly-based network intrusion detection systems (NIDS), concentrating on which aspects of the network traffic are analyzed and what feature construction and selection methods have been used. Motivation for the paper comes from the large impact data preprocessing has on the accuracy and capability of anomaly-based NIDS. The review finds that many NIDS limit their view of network traffic to the TCP/IP packet headers. Time-based statistics can be derived from these headers to detect network scans, network worm behavior, and denial-of-service attacks. A number of other NIDS perform deeper inspection of request packets to detect attacks against network services and network applications. More recent approaches analyze full service responses to detect attacks targeting clients. The review covers a wide range of NIDS, highlighting which classes of attack are detectable by each of these approaches. Data preprocessing is found to predominantly rely on expert domain knowledge for identifying the most relevant parts of network traffic and for constructing the initial candidate set of traffic features. On the other hand, automated methods have been widely used for feature extraction to reduce data dimensionality, and for feature selection to find the most relevant subset of features from this candidate set. The review shows a trend toward deeper packet inspection to construct more relevant features through targeted content parsing. These context-sensitive features are required to detect current attacks.
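As a simple illustration of the header-derived, time-based statistics mentioned above, the hypothetical sketch below counts the number of distinct destination ports each source IP contacts within a sliding time window, a classic feature for spotting port scans; the surveyed systems use a variety of such statistics, and this is not any particular system's implementation.

```python
from collections import defaultdict

def distinct_port_counts(packets, window=2.0):
    """For every packet, report how many distinct destination ports its
    source IP has contacted within the last `window` seconds; this is a
    simple time-based statistic derived purely from TCP/IP headers that
    is commonly used to flag port scans.

    packets: iterable of (timestamp, src_ip, dst_ip, dst_port) tuples,
             assumed to be ordered by timestamp.
    """
    recent = defaultdict(list)       # src_ip -> [(timestamp, dst_port), ...]
    features = []
    for ts, src, _dst, dport in packets:
        recent[src].append((ts, dport))
        # drop header records that fall outside the sliding window
        recent[src] = [(t, p) for t, p in recent[src] if ts - t <= window]
        features.append((ts, src, len({p for _, p in recent[src]})))
    return features

# Hypothetical header trace: one host probing ten ports in quick succession,
# plus a single benign HTTPS request from another host.
trace = [(0.05 * i, "10.0.0.5", "10.0.0.9", 20 + i) for i in range(10)]
trace.append((0.6, "10.0.0.7", "10.0.0.9", 443))
for ts, src, n_ports in distinct_port_counts(trace):
    print(f"t={ts:.2f}s src={src} distinct_ports={n_ports}")
```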

Relevance:

10.00%

Publisher:

Abstract:

Orthopaedic fracture fixation implants are increasingly being designed using accurate 3D models of long bones based on computed tomography (CT). Unlike CT, magnetic resonance imaging (MRI) does not involve ionising radiation and is therefore a desirable alternative to CT. This study aims to quantify the accuracy of MRI-based 3D models compared to CT-based 3D models of long bones. The femora of five intact cadaver ovine limbs were scanned using a 1.5 T MRI scanner and a CT scanner. Image segmentation of the CT and MRI data was performed using a multi-threshold segmentation method. Reference models were generated by digitising the bone surfaces, free of soft tissue, with a mechanical contact scanner. The MRI- and CT-derived models were validated against the reference models. The results demonstrated that the CT-based models contained an average error of 0.15 mm while the MRI-based models contained an average error of 0.23 mm. Statistical validation shows that there are no significant differences between 3D models based on CT and MRI data. These results indicate that the geometric accuracy of MRI-based 3D models is comparable to that of CT-based models, and therefore MRI is a potential alternative to CT for the generation of 3D models with high geometric accuracy.

Relevance:

10.00%

Publisher:

Abstract:

Distributed Denial-of-Service (DDoS) attacks continue to be one of the most pernicious threats to the delivery of services over the Internet. Not only are DDoS attacks present in many guises, they are also continuously evolving as new vulnerabilities are exploited. Hence, accurate detection of these attacks remains a challenging problem and a necessity for ensuring high-end network security. An intrinsic challenge in addressing this problem is to effectively distinguish these Denial-of-Service attacks from similar-looking Flash Events (FEs) created by legitimate clients. A considerable overlap between the general characteristics of FEs and DDoS attacks makes it difficult to precisely separate these two classes of Internet activity. In this paper we propose parameters which can be used to explicitly distinguish FEs from DDoS attacks and analyse two real-world publicly available datasets to validate our proposal. Our analysis shows that even though FEs appear very similar to DDoS attacks, there are several subtle dissimilarities which can be exploited to separate these two classes of events.
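The abstract does not enumerate the proposed parameters, so the sketch below illustrates only the general idea with one commonly used discriminator from this line of work: the fraction of source addresses in an event that were never seen during a baseline period (Flash Events are dominated by returning legitimate clients, whereas many DDoS attacks recruit large numbers of previously unseen or spoofed sources). The function and the data are hypothetical and are not the paper's method.

```python
def new_source_fraction(event_sources, baseline_sources):
    """Fraction of source IPs observed during an event that never appeared
    in the baseline period. Flash Events are driven largely by clients the
    service has seen before, while many DDoS attacks recruit a high
    proportion of previously unseen (often spoofed) sources."""
    event_sources = set(event_sources)
    if not event_sources:
        return 0.0
    return len(event_sources - set(baseline_sources)) / len(event_sources)

# Hypothetical traces (documentation/example IP ranges only).
baseline    = {"203.0.113.1", "203.0.113.2", "198.51.100.7"}
flash_event = ["203.0.113.1", "203.0.113.2", "198.51.100.7", "198.51.100.9"]
ddos_event  = [f"192.0.2.{i}" for i in range(50)]

print("flash event:", new_source_fraction(flash_event, baseline))  # low
print("ddos event: ", new_source_fraction(ddos_event, baseline))   # high
```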

Relevance:

10.00%

Publisher:

Abstract:

This thesis examines the ways in which citizens find out about socio-political issues. The project set out to discover how audience characteristics such as scepticism towards the media, gratifications sought, need for cognition and political interest influence information selection. While most previous information choice studies have focused on how individuals select from a narrow range of media types, this thesis considered a much wider sweep of the information landscape. This approach was taken to obtain an understanding of information choices in a more authentic context - in everyday life, people are not simply restricted to one or two news sources. Rather, they may obtain political information from a vast range of information sources, including media sources (e.g. radio, television, newspapers) and sources from beyond the media (e.g. interpersonal sources, public speaking events, social networking websites). Thus, the study included both news media and non-news media information sources. Data collection for the project consisted of a written, postal survey. The survey was administered to a probability sample in the greater Brisbane region, the third largest city in Australia. Data were collected during March and April 2008, approximately four months after the 2007 Australian Federal Election. Hence, the study was conducted in a non-election context. A total of 585 usable surveys were obtained. In addition to measuring the attitudinal characteristics listed above, respondents were surveyed as to which information sources (e.g. television shows, radio stations, websites and festivals) they usually use to find out about socio-political issues. Multiple linear regression analysis was conducted to explore patterns of influence between the audience characteristics and information consumption patterns. The results of this analysis indicated an apparent difference between the way citizens use news media sources and the way they use information sources from beyond the news media. In essence, it appears that non-news media information sources are used very deliberately to seek socio-political information, while media sources are used in a less purposeful way. If media use in a non-election context, such as that of the present study, is not primarily concerned with deliberate information seeking, media use must instead have other primary purposes, with political information acquisition as either a secondary driver or a by-product of that primary purpose. It appears, then, that political information consumption in a media-saturated society is more about routine ‘practices' than it is about ‘information seeking'. The suggestion that media use is no longer primarily concerned with information seeking, but rather is simply a behaviour which occurs within the broader set of everyday practices, reflects Couldry's (2004) media-as-practice paradigm. These findings highlight the need for more authentic and holistic contexts for media research. It is insufficient to consider information choices in isolation, or even from a wider range of information sources, such as that incorporated in the present study. Future media research must take greater account of the broader social contexts and practices in which media-oriented behaviours occur. The findings also call into question the previously assumed centrality of trust to information selection decisions. Citizens regularly use media they do not trust to find out about politics.
If people are willing to use information sources they do not trust for democratically important topics such as politics, it is important that citizens possess the media literacy skills to effectively understand and evaluate the information they are presented with. Without the application of such media literacy skills, a steady diet of ‘fast food’ media may result in uninformed or misinformed voting decisions, which have implications for the effectiveness of democratic processes. This research has emphasized the need for further holistic and authentically contextualised media use research, to better understand how citizens use information sources to find out about important topics such as politics.

Relevance:

10.00%

Publisher:

Abstract:

Graphene, functionalized with oleylamine (OA) and soluble in non-polar organic solvents, was produced on a large scale with a high yield by combining the Hummers process for graphite oxidation, an amine-coupling process to make OA-functionalized graphite oxide (OA-GO), and a novel reduction process using trioctylphosphine (TOP). TOP acts as both a reducing agent and an aggregation-prevention surfactant in the reduction of OA-GO in 1,2-dichlorobenzene (DCB). The reduction of OA-GO is confirmed by X-ray photoelectron spectroscopy, Fourier-transform infrared spectroscopy, X-ray diffraction, thermogravimetric analysis, and Raman spectroscopy. The exfoliation of GO, OA-GO, and OA-functionalized graphene (OA-G) is verified by atomic force microscopy. The conductivity of TOP-reduced OA-G, which is deduced from the current–voltage characteristics of a vacuum-filtered thin film, shows that the reduction of functionalized GO by TOP is as effective as the reduction of GO by hydrazine.

Relevance:

10.00%

Publisher:

Abstract:

We have grown defect-rich ZnO nanowires on a large scale by the vapour-phase reaction method without using any metal catalyst or vacuum system. The defects, including zinc vacancies, oxygen interstitials and oxygen antisites, are related to the excess of oxygen in the ZnO nanowires and are controllable. Nanowires with a high excess of oxygen exhibit a brown-colour photoluminescence, due to a dominant emission band composed of violet, blue and green emissions. Those with more balanced Zn and O show a dominant green emission, giving rise to a green colour under UV illumination. By O2 annealing, the violet luminescence following the band-edge UV emission peak can be enhanced for the as-grown nanowires. However, the green emission shows different trends under O2 annealing, which are associated with the excess of oxygen in the nanowires.

Relevance:

10.00%

Publisher:

Abstract:

Resolving a noted open problem, we show that the Undirected Feedback Vertex Set problem, parameterized by the size of the solution set of vertices, is in the parameterized complexity class Poly(k); that is, polynomial-time pre-processing is sufficient to reduce an initial problem instance (G, k) to a decision-equivalent simplified instance (G', k') where k' ≤ k, and the number of vertices of G' is bounded by a polynomial function of k. Our main result shows an O(k^11) kernelization bound.
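Stated in standard kernelization notation (the generic definition that the abstract paraphrases, with the main result's bound read as a bound on the number of vertices of the reduced instance):

\[
(G, k) \;\longmapsto\; (G', k') \quad \text{computable in time } |G|^{O(1)}, \qquad
k' \le k, \qquad |V(G')| = O\!\left(k^{11}\right),
\]
\[
(G, k) \in \textsc{Feedback Vertex Set} \iff (G', k') \in \textsc{Feedback Vertex Set}.
\]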

Relevance:

10.00%

Publisher:

Abstract:

The primary goal of a Vehicular Ad Hoc Network (VANET) is to provide real-time safety-related messages to motorists to enhance road safety. Accessing and disseminating safety-related information through the use of wireless communications technology in VANETs must be secured, as motorists may make critical decisions in dealing with an emergency situation based on the received information. If security concerns are not addressed in developing VANET systems, an adversary can tamper with, or suppress, an unprotected message to mislead motorists and cause traffic accidents and hazards. Current research on secure messaging in VANETs focuses on employing the certificate-based Public Key Infrastructure (PKI) scheme to support message encryption and digital signing. The security overhead of such a scheme, however, creates a transmission delay and introduces a time-consuming verification process to VANET communications. This thesis proposes a novel public key verification and management approach for VANETs, namely the Public Key Registry (PKR) regime. Compared to the VANET PKI scheme, this new approach can satisfy the necessary security requirements with improved performance and scalability, and at a lower cost, by reducing the security overheads of message transmission and eliminating digital certificate deployment and maintenance issues. The proposed PKR regime consists of the required infrastructure components, rules for public key management and verification, and a set of interactions and associated behaviours to meet these rule requirements. This is achieved through a system design expressed as a logical process model with functional specifications, so the PKR regime can be used as development guidelines for conforming implementations. The analysis and evaluation of the proposed PKR regime covers an assessment of its security features, the security overhead of message transmission, transmission latency, processing latency, and scalability. Compared to certificate-based PKI approaches, the proposed PKR regime can maintain the necessary security requirements, reduce the security overhead by approximately 70%, and improve performance by 98%. Meanwhile, the scalability evaluation shows that the latency of employing the proposed PKR regime remains low, at approximately 15 milliseconds, whether operating in a large or small environment. It is therefore believed that this research will add a new dimension to the provision of secure messaging services in VANETs.