54 results for Quantum Chromodynamics, Helicity Rates, One-Loop Corrections, Bremsstrahlung Contributions, Heavy Quarks, Standard Model


Relevance:

100.00%

Publisher:

Abstract:

Entrepreneurial marketing has gained popularity in both the entrepreneurship and marketing disciplines in recent times. The success of ventures that have pursued what are considered non-traditional marketing approaches has been attributed to entrepreneurial marketing practices. Despite the multitude of marketing concepts and models, there are prominent venture successes that do not conform to these and have thus been put in the "entrepreneurial" box. One only has to look to the "Virgin" model to put this in context. Branson has proven, for example, that not "sticking to the knitting" can work, given the ways the Virgin portfolio has been diversified. Consequently, an entrepreneurial orientation is considered a desirable philosophy and has become prominent in industries such as airlines and information technology. Miles and Arnold (1991) found that entrepreneurial orientation is positively correlated with marketing orientation. They propose that entrepreneurial orientation is a strategic response by firms to turbulence in the environment. While many marketing successes are analysed in hindsight using traditional marketing concepts and strategies, there are those that challenge standard marketing textbook recommendations. Marketing strategy is often viewed as a process of segmenting, targeting and positioning (STP). Academics and consultants advocate this approach along with the marketing and business plans. The reality, however, is that a number of businesses do not practise these and pursue alternative approaches instead. Other schools of thought and business models have been developed to explain differences in orientation, such as branding (Keller 2001), the service-dominant logic (Vargo and Lusch 2004) and effectuation logic (Sarasvathy 2001). This indicates that scholars are now looking to cognate fields, beyond their own disciplines, to explain a given phenomenon.
Bucking this trend is a growing number of researchers working at the interface between entrepreneurship and marketing. There is now an emerging body of work dedicated to this interface, hence the development of entrepreneurial marketing as an alternative to the traditional approaches. Hills and Hultman (2008:3) define entrepreneurial marketing as "a spirit, an orientation as well as a process of passionately pursuing opportunities and launching and growing ventures that create perceived customer value through relationships by employing innovativeness, creativity, selling, market immersion, networking and flexibility." Although it started as a special interest group, entrepreneurial marketing is now gaining recognition in the mainstream entrepreneurship and marketing literature. For example, new marketing textbooks now incorporate an entrepreneurial marketing focus (Grewal and Levy 2008). The purpose of this paper is to explore what entrepreneurial approaches are used by entrepreneurs and their impact on the success of marketing activities. Methodology/Key Propositions: In order to investigate this, we employ two cases: 42Below, vodka producers from New Zealand, and Penderyn Distillery, whisky distillers from Wales. The cases were chosen based on the following criteria. Firstly, both companies originate from small economies. Secondly, both make products (spirits) in locations that are not traditionally regarded as producers of their flagship products. Thirdly, the two companies differ in age: Penderyn is an old company established in 1882, whereas 42Below was founded only in 1999. Vodka has never been associated with New Zealand. By the same token, whisky has always been associated with Scotland and Ireland but never with Wales. Both companies defied traditional stereotypes in marketing their flagship products and found international success.
Using a comparative case study approach, we apply a qualitative lens to both companies using Covin and Slevin's (1989) set of items that purport to measure entrepreneurial orientation. These are:
1. cultural emphasis on innovation and R&D
2. high rate of new product introduction
3. bold, innovative product development
4. proactive, initiator posture
5. first to introduce new technologies and products
6. competitive posture toward competitors
7. strong proclivity for high-risk, high-return projects
8. environment requires boldness to achieve objectives
9. when faced with risk, adopts an aggressive, bold posture.
Results and Implications: We find that both companies have employed entrepreneurial marketing approaches, but with different intensities. While both acknowledge that they are different from the norm, the specifics of their individual approaches are dissimilar. Both companies have positioned their products at the premium end of their product categories and have emphasised quality and awards in their communication strategies. 42Below has carved an image of irreverence and non-conformity. They have unashamedly utilised viral marketing and entered international markets by training bartenders and hosting unconventional events. They use edgy language such as "vodka university", "vodka professors" and "vodka ambassadors". Penderyn Distillery has taken a more traditional approach to marketing its products, portraying romantic images of the age-old tradition of distilling as key to its positioning. Both companies enjoy success as evidenced by industry awards and international acclaim.

Relevance:

100.00%

Publisher:

Abstract:

The construction industry is a key national economic component. It tends to be at the forefront of cyclic changes in the Australian economy. It has a significant impact, both directly and indirectly, on the efficiency and productivity of other industries. Moreover, it affects everyone to a greater or lesser extent through its products, whether they are manifested in the physical infrastructure that supports the operation of the economy or in the built environment that directly impacts the quality of life experienced by individuals. In financial terms the industry makes one of the largest contributions to the Australian economy, accounting for 4.7 per cent of GDP, worth over $30B in 2001. The construction industry comprises a myriad of small firms across several important sectors, including:
o Residential building
o Commercial building
o Building services
o Engineering
o Infrastructure
o Facilities management
o Property development
Each sector is typified by firms with distinctive characteristics such as the number of employees, the size and value of contracts, the number of jobs, and so forth. It tends to be the case that firms operating in commercial building are larger than those involved in residential construction. The largest contractors are found in engineering and infrastructure, as well as in the commercial building sub-sectors. However, all sectors are characterised by their reliance upon sub-contractors to carry out on-site operations. Professionals from the various design consultant groups operate across all of these sectors. This description masks one of the most significant underlying causes of inefficiency in the construction industry, namely its fragmentation.
The Construction Industry chapter of the 2004 Australian Year Book, published by the Australian Bureau of Statistics, unmasks the industry's fragmented structure, typified by the large number of operating businesses within it, the vast majority of which are small companies employing fewer than 5 people. It identifies over 190,000 firms, of which over 90 per cent employ fewer than 5 people. At the other end of the spectrum, firms employing 20 or more people account for fractionally more than one per cent of businesses in the industry.

Relevance:

100.00%

Publisher:

Abstract:

This paper examines the feasibility of automating the dragline bucket excavators used to strip overburden from open-cut mines. In particular, the automatic control of bucket carry angle and bucket trajectory is addressed. Open-loop dynamics of a 1:20 scale model dragline bucket are identified through measurement of the frequency response between carry angle and drag motor input voltage. A strategy for automatic control of carry angle is devised and implemented using bucket angle and rate feedback. System compensation and tuning are explained, and closed-loop frequency and time responses are measured.
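The angle-plus-rate feedback strategy described above can be sketched in a few lines. The second-order plant below is a generic, lightly damped linearised pendulum, and the gains and parameters are illustrative assumptions, not the identified 1:20 model dynamics or the tuning reported in the paper.

```python
def simulate_carry_angle(ref=0.3, kp=25.0, kd=8.0, w0=1.0, zeta=0.05,
                         dt=0.001, t_end=20.0):
    """Closed-loop carry angle under proportional (angle) + derivative (rate) feedback.

    Assumed linearised plant: theta'' = -w0^2*theta - 2*zeta*w0*theta' + u
    Control law:              u = kp*(ref - theta) - kd*theta'
    """
    theta, rate = 0.0, 0.0
    for _ in range(int(t_end / dt)):
        u = kp * (ref - theta) - kd * rate           # angle + rate feedback
        acc = -w0 ** 2 * theta - 2.0 * zeta * w0 * rate + u
        rate += acc * dt                             # explicit Euler integration
        theta += rate * dt
    return theta
```

The rate term adds the damping the open-loop bucket lacks; with pure proportional-plus-derivative action the steady-state angle settles at kp*ref/(w0^2 + kp), close to (but slightly below) the reference, and integral action would be needed to remove that residual offset.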

Relevance:

100.00%

Publisher:

Abstract:

Transport regulators consider that, with respect to pavement damage, heavy vehicles (HVs) are the riskiest vehicles on the road network. That HV suspension design contributes to road and bridge damage has been recognised for some decades. This thesis deals with some aspects of HV suspension characteristics, particularly (but not exclusively) air suspensions. It covers three areas: developing low-cost in-service HV suspension testing, the effects of larger-than-industry-standard longitudinal air lines, and the characteristics of on-board mass (OBM) systems for HVs. All these areas, whilst seemingly disparate, seek to inform the management of HVs, reduce their impact on the network asset and/or provide a measurement mechanism for worn HV suspensions. A number of project management groups at the State and National level in Australia have been, and will be, presented with the results of the project that produced this thesis. This should serve to inform their activities applicable to this research. A number of HVs were tested for various characteristics. These tests were used to form a number of conclusions about HV suspension behaviours. Wheel forces from road test data were analysed. A “novel roughness” measure was developed and applied to the road test data to determine dynamic load sharing, amongst other research outcomes. Further, it was proposed that this approach could inform future development of pavement models incorporating roughness and peak wheel forces. Left/right variations in wheel forces and wheel force variations for different speeds were also presented. This led to some conclusions regarding suspension and wheel force frequencies, their transmission to the pavement, and repetitive wheel loads in the spatial domain. An improved method of determining dynamic load sharing was developed and presented. It used the correlation coefficient between two elements of a HV to determine dynamic load sharing.
This was validated against a mature dynamic load sharing metric, the dynamic load sharing coefficient (de Pont, 1997). This was the first time that the technique of measuring correlation between elements on a HV had been used in a test case vs. control case comparison for two different sized air lines. Dynamic load sharing at the air springs was shown to improve for the test case of the large longitudinal air lines. The statistically significant improvement in dynamic load sharing at the air springs from larger longitudinal air lines varied from approximately 30 percent to 80 percent. Dynamic load sharing at the wheels was improved only for low air line flow events for the test case of larger longitudinal air lines. Statistically significant improvements to some suspension metrics across the range of test speeds and “novel roughness” values were evident from the use of larger longitudinal air lines, but these were not uniform. Of note were improvements to suspension metrics involving peak dynamic forces, ranging from below the error margin to approximately 24 percent. Abstract models of HV suspensions were developed from the results of some of the tests. Those models were used to propose further development of, and future directions of research into, further gains in HV dynamic load sharing, from alterations to currently available damping characteristics combined with implementation of large longitudinal air lines. In-service testing of HV suspensions was found to be possible within a documented error range from below the error margin to approximately 16 percent. These results were compared with either the manufacturer's certified data or test results replicating the Australian standard for “road-friendly” HV suspensions, Vehicle Standards Bulletin 11.
OBM accuracy testing and development of tamper evidence from OBM data were detailed for over 2000 individual data points across twelve test and control OBM systems from eight suppliers installed on eleven HVs. The results indicated that 95 percent of contemporary OBM systems available in Australia are accurate to +/- 500 kg. The total variation in OBM linearity, after three outliers in the data were removed, was 0.5 percent. A tamper indicator and other OBM metrics that could be used by jurisdictions to determine tamper events were developed and documented. That OBM systems could be used as one vector for in-service testing of HV suspensions was one of a number of synergies between the seemingly disparate streams of this project.
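The correlation-based load-sharing idea described above can be sketched as follows: compute the Pearson correlation coefficient between the wheel-force records of two suspension elements, and read higher correlation as better dynamic load sharing. The synthetic signals and thresholds below are illustrative assumptions, not the thesis's test data or the dynamic load sharing coefficient of de Pont (1997).

```python
import math
import random

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length force records."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

rng = random.Random(7)
# A shared road-roughness input excites both axle elements.
road = [math.sin(0.05 * i) + 0.5 * math.sin(0.23 * i) for i in range(2000)]

# Well-shared suspension: both elements mostly track the common input.
good_a = [r + rng.gauss(0, 0.1) for r in road]
good_b = [r + rng.gauss(0, 0.1) for r in road]

# Poorly shared suspension: large independent dynamic force components.
poor_a = [r + rng.gauss(0, 1.0) for r in road]
poor_b = [r + rng.gauss(0, 1.0) for r in road]

r_good = pearson(good_a, good_b)
r_poor = pearson(poor_a, poor_b)
```

Because the coefficient is bounded in [-1, 1] and insensitive to the absolute force magnitudes, it gives a scale-free comparison between the test case and control case records.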

Relevance:

100.00%

Publisher:

Abstract:

A group key exchange (GKE) protocol allows a set of parties to agree upon a common secret session key over a public network. In this thesis, we focus on designing efficient GKE protocols using public key techniques and appropriately revising security models for GKE protocols. For the purpose of modelling and analysing the security of GKE protocols we apply the widely accepted computational complexity approach. The contributions of the thesis to the area of GKE protocols are manifold. We propose the first GKE protocol that requires only one round of communication and is proven secure in the standard model. Our protocol is generically constructed from a key encapsulation mechanism (KEM). We also suggest an efficient KEM from the literature, which satisfies the underlying security notion, to instantiate the generic protocol. We then concentrate on enhancing the security of one-round GKE protocols. A new model of security for forward secure GKE protocols is introduced and a generic one-round GKE protocol with forward security is then presented. The security of this protocol is also proven in the standard model. We also propose an efficient forward secure encryption scheme that can be used to instantiate the generic GKE protocol. Our next contributions are to the security models of GKE protocols. We observe that the analysis of GKE protocols has not been as extensive as that of two-party key exchange protocols. Particularly, the security attribute of key compromise impersonation (KCI) resilience has so far been ignored for GKE protocols. We model the security of GKE protocols addressing KCI attacks by both outsider and insider adversaries. We then show that a few existing protocols are not secure against KCI attacks. A new proof of security for an existing GKE protocol is given under the revised model assuming random oracles. Subsequently, we treat the security of GKE protocols in the universal composability (UC) framework. 
We present a new UC ideal functionality for GKE protocols capturing the security attribute of contributiveness. An existing protocol with minor revisions is then shown to realize our functionality in the random oracle model. Finally, we explore the possibility of constructing GKE protocols in the attribute-based setting. We introduce the concept of attribute-based group key exchange (AB-GKE). A security model for AB-GKE and a one-round AB-GKE protocol satisfying our security notion are presented. The protocol is generically constructed from a new cryptographic primitive called encapsulation policy attribute-based KEM (EP-AB-KEM), which we introduce in this thesis. We also present a new EP-AB-KEM with a proof of security assuming generic groups and random oracles. The EP-AB-KEM can be used to instantiate our generic AB-GKE protocol.
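The "one-round GKE from a KEM" idea can be illustrated with a toy: given a PKI, each party broadcasts a single message that encapsulates a fresh seed to every other party, and the session key is a hash of all seeds. Everything below is an insecure classroom sketch under stated assumptions (a Mersenne-prime ElGamal-style KEM, SHA-256 as the key-derivation hash); it is not the thesis's construction or a secure instantiation.

```python
import hashlib
import secrets

P = 2 ** 127 - 1   # Mersenne prime: toy-sized group modulus, NOT a secure choice
G = 3

def kdf(x: int) -> bytes:
    return hashlib.sha256(x.to_bytes(16, "big")).digest()

def keygen():
    sk = secrets.randbelow(P - 2) + 1
    return sk, pow(G, sk, P)

def encap(pk):
    """KEM encapsulation: returns (ciphertext, shared key)."""
    r = secrets.randbelow(P - 2) + 1
    return pow(G, r, P), kdf(pow(pk, r, P))

def decap(sk, c):
    return kdf(pow(c, sk, P))

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def broadcast(i, seed, pks):
    """Party i's single round message: its seed wrapped for every other party."""
    msg = {}
    for j, pk in enumerate(pks):
        if j != i:
            c, k = encap(pk)
            msg[j] = (c, xor(k, seed))
    return msg

def session_key(i, sk, seed, msgs):
    """Recover every party's seed and hash them in a fixed order."""
    seeds = []
    for j, msg in enumerate(msgs):
        if j == i:
            seeds.append(seed)
        else:
            c, e = msg[i]
            seeds.append(xor(decap(sk, c), e))
    return hashlib.sha256(b"".join(seeds)).digest()

# Three parties run the protocol: one broadcast each, then local key derivation.
keys = [keygen() for _ in range(3)]
pks = [pk for _, pk in keys]
seeds = [secrets.token_bytes(32) for _ in range(3)]
msgs = [broadcast(i, seeds[i], pks) for i in range(3)]
session = [session_key(i, keys[i][0], seeds[i], msgs) for i in range(3)]
```

The single-round property comes from the pre-distributed public keys: no party needs to see another's message before sending its own, which is the structural point the generic KEM-based construction exploits.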

Relevance:

100.00%

Publisher:

Abstract:

Visualisation provides a method to efficiently convey and understand the complex nature and processes of groundwater systems. This technique has been applied to the Lockyer Valley to aid in comprehending the current condition of the system. The Lockyer Valley in southeast Queensland hosts intensive irrigated agriculture sourcing groundwater from alluvial aquifers. The valley is around 3000 km2 in area, and the alluvial deposits are typically 1-3 km wide and 20-35 m deep in the main channels, reducing in size in the subcatchments. The alluvium is configured as a series of elongate “fingers”. In this roughly circular valley, recharge to the alluvial aquifers comes largely from seasonal storm events on the surrounding ranges. The ranges are overlain by basaltic aquifers of Tertiary age, which overall are quite transmissive. Both runoff from these ranges and infiltration into the basalts provide ephemeral flow to the streams of the valley. Throughout the valley there are over 5,000 bores extracting alluvial groundwater, plus lesser numbers extracting from the underlying sandstone bedrock. Although there are approximately 2,500 monitoring bores, the only regularly monitored area is the formally declared management zone in the lower one third of the valley. This zone has a calibrated Modflow model (Durick and Bleakly, 2000); a broader valley Modflow model was developed in 2002 (KBR) but did not have extensive extraction data for detailed calibration. Another Modflow model focused on a river confluence in a central area (Wilson, 2005), with some local production data and pumping test results. A recent subcatchment simulation model incorporates a network of bores with short-period automated hydrographic measurements (Dvoracek and Cox, 2008). The above simulation models were all based on conceptual hydrogeological models of differing scale and detail.

Relevance:

100.00%

Publisher:

Abstract:

Aims: This study determined whether the visibility benefits of positioning retroreflective strips in biological motion configurations were evident at real-world road worker sites. ---------- Methods: 20 visually normal drivers (M=40.3 years) participated in this study, which was conducted at two road work sites (one suburban and one freeway) on two separate nights. At each site, four road workers walked in place wearing one of four different clothing options: a) standard road worker night vest, b) standard night vest plus retroreflective strips on the thighs, c) standard night vest plus retroreflective strips on the ankles and knees, d) standard night vest plus retroreflective strips on eight moveable joints (full biomotion). Participants seated in stationary vehicles at three different distances (80 m, 160 m, 240 m) rated the relative conspicuity of the four road workers using a series of standardized visibility and ranking scales. ---------- Results: Adding retroreflective strips in the full biomotion configuration to the standard night vest significantly (p<0.001) enhanced perceptions of road worker visibility compared to the standard vest alone or in combination with thigh retroreflective markings. These visibility benefits were evident at all distances and at both sites. Retroreflective markings at the ankles and knees also provided visibility benefits compared to the standard vest; however, the full biomotion configuration was significantly better than all of the other configurations. ---------- Conclusions: These data provide the first evidence that the benefits of biomotion retroreflective markings previously demonstrated under laboratory and closed- and open-road conditions are also evident at real work sites.

Relevance:

100.00%

Publisher:

Abstract:

Corneal-height data are typically measured with videokeratoscopes and modeled using a set of orthogonal Zernike polynomials. We address the estimation of the number of Zernike polynomials, which is formalized as a model-order selection problem in linear regression. Classical information-theoretic criteria tend to overestimate the order of the corneal surface model because of the weakness of their penalty functions, while bootstrap-based techniques tend to underestimate it or require extensive processing. In this paper, we propose the efficient detection criterion (EDC), which has the same general form as information-theoretic criteria, as an alternative for estimating the optimal number of Zernike polynomials. We first show, via simulations, that the EDC outperforms a large number of information-theoretic criteria and resampling-based techniques. We then illustrate that using the EDC for real corneas results in models that are in closer agreement with clinical expectations and provides a means of distinguishing normal corneal surfaces from astigmatic and keratoconic surfaces.
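The order-selection recipe can be sketched in one dimension: fit polynomials of increasing order and pick the order minimising n·log(RSS/n) + k·C_n, where an EDC-style penalty C_n grows faster than log log n but slower than n. The 1D monomial setting, the specific choice C_n = sqrt(n·log log n), and all the data below are illustrative assumptions standing in for the Zernike fit.

```python
import math
import random

def fit_poly(xs, ys, order):
    """Least-squares polynomial fit via normal equations (adequate for low orders)."""
    p = order + 1
    ata = [[sum(x ** (i + j) for x in xs) for j in range(p)] for i in range(p)]
    aty = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(p)]
    for col in range(p):                      # Gaussian elimination, partial pivoting
        piv = max(range(col, p), key=lambda r: abs(ata[r][col]))
        ata[col], ata[piv] = ata[piv], ata[col]
        aty[col], aty[piv] = aty[piv], aty[col]
        for r in range(col + 1, p):
            f = ata[r][col] / ata[col][col]
            for c in range(col, p):
                ata[r][c] -= f * ata[col][c]
            aty[r] -= f * aty[col]
    coef = [0.0] * p
    for r in range(p - 1, -1, -1):
        coef[r] = (aty[r] - sum(ata[r][c] * coef[c] for c in range(r + 1, p))) / ata[r][r]
    rss = sum((y - sum(c * x ** i for i, c in enumerate(coef))) ** 2
              for x, y in zip(xs, ys))
    return coef, rss

def edc_order(xs, ys, max_order=6):
    """Order minimising n*log(RSS/n) + k*C_n, with C_n = sqrt(n * log(log n))."""
    n = len(xs)
    cn = math.sqrt(n * math.log(math.log(n)))
    return min(range(max_order + 1),
               key=lambda k: n * math.log(fit_poly(xs, ys, k)[1] / n) + (k + 1) * cn)

rng = random.Random(0)
xs = [i / 25.0 - 1.0 for i in range(50)]
ys = [1 + 2 * x - x ** 2 + 0.5 * x ** 3 + rng.gauss(0, 0.05) for x in xs]
order = edc_order(xs, ys)   # cubic data, so a low order (around 3) should win
```

The penalty is the whole story: a constant penalty (AIC-like) keeps accepting spurious terms as n grows, whereas a C_n that diverges faster than log log n makes the selected order consistent.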

Relevance:

100.00%

Publisher:

Abstract:

The contributions of this thesis fall into three areas of certificateless cryptography. The first area is encryption, where we propose new constructions for both identity-based and certificateless cryptography. We construct an n-out-of-n group encryption scheme for identity-based cryptography that does not require any special means to generate the keys of the participating trusted authorities. We also introduce a new security definition for chosen ciphertext secure multi-key encryption. We prove that our construction is secure as long as at least one authority is uncompromised, and show that the existing constructions for chosen ciphertext security from identity-based encryption also hold in the group encryption case. We then consider certificateless encryption as the special case of 2-out-of-2 group encryption and give constructions for highly efficient certificateless schemes in the standard model. Among these is the first construction of a lattice-based certificateless encryption scheme. Our next contribution is a highly efficient certificateless key encapsulation mechanism (KEM), which we prove secure in the standard model. We introduce a new way of proving the security of certificateless schemes that are based on identity-based schemes: we leave the identity-based part of the proof intact and extend it only to cover the part introduced by the certificateless scheme. We show that our construction is more efficient than any instantiation of the generic constructions for certificateless key encapsulation in the standard model. The third area in which the thesis advances certificateless cryptography is key agreement. Swanson showed that many certificateless key agreement schemes are insecure if considered in a reasonable security model. We propose the first provably secure certificateless key agreement schemes in the strongest model for certificateless key agreement.
We extend Swanson's definition for certificateless key agreement and give more power to the adversary. Our new schemes are secure as long as each party has at least one uncompromised secret. Our first construction is in the random oracle model and gives the adversary slightly more capabilities than our second construction in the standard model. Interestingly, our standard model construction is as efficient as the random oracle model construction.

Relevance:

100.00%

Publisher:

Abstract:

The behaviour of ion channels within cardiac and neuronal cells is intrinsically stochastic in nature. When the number of channels is small, this stochastic noise is large and can have an impact on the dynamics of the system, which is potentially an issue when modelling small neurons and drug block in cardiac cells. While exact methods correctly capture the stochastic dynamics of a system, they are computationally expensive, restricting their inclusion in tissue-level models, and so approximations to exact methods are often used instead. The other issue in modelling ion channel dynamics is that the transition rates are voltage dependent, adding a level of complexity as the channel dynamics are coupled to the membrane potential. By assuming that such transition rates are constant over each time step, it is possible to derive a stochastic differential equation (SDE), in the same manner as for biochemical reaction networks, that describes the stochastic dynamics of ion channels. While such a model is more computationally efficient than exact methods, we show that there are analytical problems with the resulting SDE, as well as issues in using current numerical schemes to solve such an equation. We therefore make two contributions: we develop a different model of stochastic ion channel dynamics that behaves correctly analytically, and we discuss numerical methods that preserve the analytical properties of the model.
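An exact method of the kind referred to here is Gillespie's stochastic simulation algorithm (SSA). The sketch below applies it to a population of two-state (closed/open) channels with constant rates, i.e. with the voltage coupling frozen, which mirrors the "rates constant over a step" simplification the abstract describes; the channel count and rate values are illustrative.

```python
import random

def ssa_open_fraction(n_ch=200, alpha=2.0, beta=3.0, t_end=50.0, seed=1):
    """Exact SSA for n_ch independent closed<->open channels with constant rates.

    Returns the time-averaged open fraction; for constant rates the
    stationary mean is alpha / (alpha + beta).
    """
    rng = random.Random(seed)
    t, n_open, area = 0.0, 0, 0.0
    while t < t_end:
        a_open = alpha * (n_ch - n_open)    # propensity of a closed->open event
        a_close = beta * n_open             # propensity of an open->closed event
        dt = rng.expovariate(a_open + a_close)
        dt = min(dt, t_end - t)
        area += dt * n_open / n_ch          # accumulate open fraction * time
        t += dt
        if t >= t_end:
            break
        if rng.random() * (a_open + a_close) < a_open:
            n_open += 1
        else:
            n_open -= 1
    return area / t_end
```

The SSA jumps one channel at a time, so the open count stays inside [0, n_ch] by construction; a chemical-Langevin-style SDE replaces these jumps with Gaussian increments, which is exactly where boundary-violation problems of the sort the abstract raises can appear.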

Relevance:

100.00%

Publisher:

Abstract:

It is not uncommon for enterprises today to be faced with the demand to integrate and incorporate many different and possibly heterogeneous systems, generally independently designed and developed, to allow seamless access. In effect, the integration of these systems results in one large whole that must, at the same time, maintain local autonomy and continue working as an independent entity. This problem has introduced a new distributed architecture called federated systems. The most challenging issue in federated systems is how the members can cooperate efficiently while preserving their autonomy, especially their security autonomy. This thesis intends to address this issue. The thesis reviews the evolution of the concept of federated systems and discusses their organisational characteristics as well as the security issues remaining in existing approaches. The thesis examines how delegation can be used as a means to achieve better security, especially authorisation, while maintaining autonomy for the participating members of the federation. A delegation taxonomy is proposed as one of the main contributions. The major contribution of this thesis is to study and design a mechanism to support delegation within and between multiple security domains with constraint management capability. A novel delegation framework is proposed comprising two modules: a Delegation Constraint Management module and a Policy Management module. The first module is designed to effectively create, track and manage delegation constraints, especially for delegation processes which require re-delegation (indirect delegation). It employs two algorithms: one to trace the root authority of a delegation constraint chain, and one to prevent potential conflicts when creating a delegation constraint chain. The first module is designed for conflict prevention, not conflict resolution.
The second module is designed to support the first via a policy comparison capability. Its major function is to provide the delegation framework with the capability to compare policies and constraints (written in the format of a policy). The module is an extension of Lin et al.'s work on policy filtering and policy analysis. Throughout the thesis, case studies are used as examples to illustrate the concepts discussed. These two modules are designed to capture one of the most important aspects of the delegation process: the relationships between delegation transactions and the involved constraints, which are not well addressed by existing approaches. This contribution is significant because these relationships provide the information needed to track and enforce the involved delegation constraints and therefore play a vital role in maintaining and enforcing security for transactions across multiple security domains.
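The two chain algorithms can be sketched with a toy data structure: each delegation records its delegator and the rights granted, root tracing walks the delegator links, and conflict prevention rejects a grant that exceeds the rights the delegator itself holds. The representation and all names below are hypothetical illustrations, not the thesis's framework.

```python
class DelegationChain:
    """Toy chain: each principal holds (delegator, granted rights)."""

    def __init__(self, root, rights):
        self.grants = {root: (None, frozenset(rights))}

    def delegate(self, delegator, delegatee, rights):
        if delegator not in self.grants:
            raise ValueError(f"{delegator} holds no delegation in this chain")
        rights = frozenset(rights)
        held = self.grants[delegator][1]
        # Conflict *prevention*: refuse any grant exceeding the delegator's rights.
        if not rights <= held:
            raise ValueError(f"conflict: {sorted(rights - held)} not held by {delegator}")
        self.grants[delegatee] = (delegator, rights)

    def trace_root(self, principal):
        """Follow delegator links back to the chain's root authority."""
        path = [principal]
        while self.grants[path[-1]][0] is not None:
            path.append(self.grants[path[-1]][0])
        return path

chain = DelegationChain("DomainA", {"read", "write"})
chain.delegate("DomainA", "Alice", {"read", "write"})   # direct delegation
chain.delegate("Alice", "Bob", {"read"})                # re-delegation (indirect)
```

Here chain.trace_root("Bob") walks back to "DomainA", and an attempt such as chain.delegate("Bob", "Carol", {"write"}) is rejected before the conflicting constraint ever enters the chain, which is the prevention-not-resolution stance described above.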

Relevance:

100.00%

Publisher:

Abstract:

Sequence data often contain competing signals that are detected by network programs or Lento plots. Such data can be formed by generating sequences on more than one tree and combining the results (a mixture model). We report that with such mixture models, the estimates of edge (branch) lengths from maximum likelihood (ML) methods that assume a single tree are biased. Based on the observed number of competing signals in real data, such ML bias is expected to occur frequently. Because network methods can recover competing signals more accurately, there is a need for ML methods that allow a network. A fundamental problem is that mixture models can have more parameters than can be recovered from the data, so some mixtures are not, in principle, identifiable. We recommend that network programs be incorporated into best-practice analysis, along with ML and Bayesian trees.

Relevance:

100.00%

Publisher:

Abstract:

From human biomonitoring data that are increasingly collected in the United States, Australia, and other countries from large-scale field studies, we obtain snapshots of concentration levels of various persistent organic pollutants (POPs) within a cross section of the population at different times. Not only can we observe the trends within this population over time, but we can also gain information going beyond the obvious time trends. By combining the biomonitoring data with pharmacokinetic modelling, we can reconstruct the time-variant exposure to individual POPs, determine their intrinsic elimination half-lives in the human body, and predict future levels of POPs in the population. Different approaches have been employed to extract information from human biomonitoring data. Pharmacokinetic (PK) models have been combined with longitudinal data [1], with single [2] or multiple [3] average concentrations from cross-sectional data (CSD), or with multiple CSD with or without empirical exposure data [4]. In the latter study, for the first time, the authors based their modelling outputs on two sets of CSD plus empirical exposure data, which allowed the model outputs to be further constrained by the extensive body of empirical measurements. Here we use a PK model to analyse recent PBDE concentrations measured in the Australian population. In this study, we are able to base our model results on four sets of CSD [5-7]; we focus on two PBDE congeners that have been shown [3,5,8-9] to differ in intake rates and half-lives, with BDE-47 associated with high intake rates and a short half-life and BDE-153 with lower intake rates and a longer half-life. By fitting the model to PBDE levels measured in different age groups in different years, we determine the intake levels of BDE-47 and BDE-153, as well as the half-lives of these two chemicals in the Australian population.
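The backbone of such reconstructions is commonly a one-compartment model with first-order elimination: constant intake I and elimination rate k = ln 2 / half-life give a body burden B(t) = (I/k)(1 - e^(-kt)). The function below is an illustrative sketch of that model; the intake and half-life values are made-up placeholders, not the calibrated parameters for BDE-47 or BDE-153.

```python
import math

def body_burden(intake_per_day, half_life_years, ages):
    """One-compartment, first-order model: dB/dt = I - k*B, B(0) = 0.

    Closed form: B(t) = (I/k) * (1 - exp(-k*t)), with k = ln2 / half-life.
    """
    k = math.log(2) / half_life_years
    i_per_year = intake_per_day * 365.0
    return [i_per_year / k * (1.0 - math.exp(-k * a)) for a in ages]

ages = [10, 20, 40, 60]
short = body_burden(1.0, 2.0, ages)    # short-half-life congener (BDE-47-like)
long_ = body_burden(0.2, 12.0, ages)   # long-half-life congener (BDE-153-like)
```

A short-half-life congener plateaus at I/k early in life, so its cross-sectional levels track recent intake, while a long-half-life congener keeps accumulating with age; that difference in age profiles is what lets a fit to multiple CSD separate intake from half-life.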

Relevance:

100.00%

Publisher:

Abstract:

In this paper we propose a framework for gradient descent image and object alignment in the Fourier domain. Our method centers upon the classical Lucas & Kanade (LK) algorithm, where we represent the source and template/model in the complex 2D Fourier domain rather than in the spatial 2D domain. We refer to our approach as the Fourier LK (FLK) algorithm. The FLK formulation is advantageous when one pre-processes the source image and template/model with a bank of filters (e.g. oriented edges, Gabor, etc.), as: (i) it can handle substantial illumination variations; (ii) the inefficient pre-processing filter bank step can be subsumed within the FLK algorithm as a sparse diagonal weighting matrix; (iii) unlike traditional LK, the computational cost is invariant to the number of filters, making the approach far more efficient; and (iv) the approach extends to the inverse compositional form of the LK algorithm, where nearly all steps (including the Fourier transform and filter bank pre-processing) can be pre-computed, leading to an extremely efficient and robust approach to gradient descent image matching. Further, these computational savings translate to non-rigid object alignment tasks that are considered extensions of the LK algorithm, such as those found in Active Appearance Models (AAMs).
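The diagonal-weighting claim in (ii) rests on two classical facts: convolution becomes pointwise multiplication under the DFT, and Parseval's theorem preserves sums of squares. So filtering both signals with a whole bank and summing the squared differences equals a single Fourier-domain difference weighted by the diagonal W = sum over filters of |F_f|^2. The sketch below checks this identity in 1D with a naive DFT and illustrative signals; it is not the paper's 2D implementation.

```python
import cmath

def dft(x):
    """Naive discrete Fourier transform (O(n^2), fine for a demo)."""
    n = len(x)
    return [sum(x[k] * cmath.exp(-2j * cmath.pi * j * k / n) for k in range(n))
            for j in range(n)]

def circ_conv(h, x):
    """Circular convolution of two length-n sequences."""
    n = len(x)
    return [sum(h[k] * x[(j - k) % n] for k in range(n)) for j in range(n)]

n = 8
x = [1.0, 3.0, -2.0, 0.5, 4.0, -1.0, 2.0, 0.0]   # "source" signal
y = [0.5, 2.5, -1.0, 1.5, 3.0, 0.0, 2.5, -0.5]   # "template" signal
bank = [[1.0, -1.0] + [0.0] * 6,                  # edge-like filter, zero-padded
        [0.25, 0.5, 0.25] + [0.0] * 5]            # smoothing filter, zero-padded

# Spatial domain: filter both signals with every filter, then sum the SSDs.
spatial = sum(sum((a - b) ** 2 for a, b in zip(circ_conv(f, x), circ_conv(f, y)))
              for f in bank)

# Fourier domain: ONE diagonally weighted SSD, weights W = sum of |F_f|^2.
X, Y = dft(x), dft(y)
F = [dft(f) for f in bank]
W = [sum(abs(Fk[j]) ** 2 for Fk in F) for j in range(n)]
fourier = sum(W[j] * abs(X[j] - Y[j]) ** 2 for j in range(n)) / n
```

Because W depends only on the filter bank, it can be pre-computed once, which is why the per-iteration cost in the Fourier formulation does not grow with the number of filters.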