896 results for Conditional sales


Relevance:

10.00%

Publisher:

Abstract:

In this globalized environment, Taiwanese firms have been very successful in achieving growth via international market expansion. In particular, the Taiwanese electronics industry has shown a dynamism lacking in comparable industries around the world. However, in recent years there has been a move by many of the larger Taiwanese manufacturing firms to outsource their manufacturing to low-cost producers such as China in order to remain competitive. Conversely, most Taiwanese small- to medium-sized enterprises (SMEs) have retained their production facilities in Taiwan. These SMEs seek to expand their sales beyond the domestic market by employing an export strategy, making a significant socioeconomic contribution to the domestic and regional economies. This paper highlights the key dimensions, such as enhancing factors (benefits/advantages), inhibiting factors (barriers/costs), and managerial factors (characteristics/commitment), that play an important role in the internationalization of SMEs located within the Taiwanese electronics industry. A logistic regression model is used to predict the probability of a firm being an exporter.
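
As a rough illustration of the final step, the sketch below fits a logistic regression that predicts the probability of a firm being an exporter from the three factor groups named above. The variable names and data are hypothetical and synthetic; this is not the paper's actual model or dataset.

```python
# Illustrative sketch only: logistic regression of exporter status on the
# three factor groups named in the abstract. All data are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 300
X = np.column_stack([
    rng.normal(size=n),   # enhancing factors (benefits/advantages), standardised score
    rng.normal(size=n),   # inhibiting factors (barriers/costs), standardised score
    rng.normal(size=n),   # managerial factors (characteristics/commitment), standardised score
])
# Synthetic outcome: 1 = exporter, 0 = non-exporter
p = 1 / (1 + np.exp(-(0.8 * X[:, 0] - 0.6 * X[:, 1] + 0.5 * X[:, 2])))
y = rng.binomial(1, p)

model = LogisticRegression().fit(X, y)
print("coefficients:", model.coef_, "intercept:", model.intercept_)
print("P(exporter) for the first firm:", model.predict_proba(X[:1])[0, 1])
```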

Relevance:

10.00%

Publisher:

Abstract:

This study aims to examine the impact of socio-ecologic factors on the transmission of Ross River virus (RRV) infection and to identify areas prone to socially and ecologically driven epidemics in Queensland, Australia. We used a Bayesian spatiotemporal conditional autoregressive model to quantify the relationship between monthly variation in RRV incidence and socio-ecologic factors and to determine spatiotemporal patterns. Our results show that the average increase in monthly RRV incidence was 2.4% (95% credible interval (CrI): 0.1–4.5%) and 2.0% (95% CrI: 1.6–2.3%) for a 1°C increase in monthly average maximum temperature and a 10 mm increase in monthly average rainfall, respectively. Significant spatiotemporal variation and a significant interactive effect between temperature and rainfall on RRV incidence were found. No association between the Socio-Economic Indexes for Areas (SEIFA) and RRV was observed. The transmission of RRV in Queensland, Australia appeared to be primarily driven by ecologic variables rather than social factors.
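
For readers unfamiliar with this class of model, the block below gives a typical Poisson conditional autoregressive (CAR) formulation and shows how a coefficient maps to the reported percentage increases. It is an illustrative sketch; the study's exact specification, covariates and priors may differ.

```latex
% A typical Poisson--CAR formulation (illustrative only):
\begin{align*}
  y_{it} &\sim \mathrm{Poisson}(\mu_{it}), \\
  \log \mu_{it} &= \log E_{it} + \beta_1\,\mathrm{temp}_{it} + \beta_2\,\mathrm{rain}_{it}
                   + \beta_3\,\mathrm{temp}_{it}\,\mathrm{rain}_{it} + u_i + v_i, \\
  u_i \mid u_{-i} &\sim \mathcal{N}\!\left(\frac{1}{n_i}\sum_{j \sim i} u_j,\; \frac{\sigma_u^2}{n_i}\right),
  \qquad v_i \sim \mathcal{N}(0, \sigma_v^2),
\end{align*}
```

Under such a formulation, the reported 2.4% increase per 1°C corresponds to $(e^{\beta_1} - 1) \times 100\% \approx 2.4\%$, i.e. $\beta_1 \approx \log(1.024) \approx 0.024$.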

Relevance:

10.00%

Publisher:

Abstract:

The traditional searching method for model-order selection in linear regression is a nested full-parameters-set searching procedure over the desired orders, which we call full-model order selection. On the other hand, a method for model selection searches for the best sub-model within each order. In this paper, we propose using the model-selection searching method for model-order selection, which we call partial-model order selection. We show by simulations that the proposed searching method gives better accuracy than the traditional one, especially for low signal-to-noise ratios, over a wide range of model-order selection criteria (both information-theoretic and bootstrap-based). We also show that for some models the performance of the bootstrap-based criterion improves significantly by using the proposed partial-model selection searching method.

Index Terms— Model order estimation, model selection, information theoretic criteria, bootstrap

1. INTRODUCTION

Several model-order selection criteria can be applied to find the optimal order. Some of the more commonly used information-theoretic procedures include Akaike’s information criterion (AIC) [1], corrected Akaike (AICc) [2], minimum description length (MDL) [3], normalized maximum likelihood (NML) [4], the Hannan-Quinn criterion (HQC) [5], conditional model-order estimation (CME) [6], and the efficient detection criterion (EDC) [7]. From a practical point of view, it is difficult to decide which model-order selection criterion to use. Many of them perform reasonably well when the signal-to-noise ratio (SNR) is high. The discrepancies in their performance, however, become more evident when the SNR is low. In those situations, the performance of a given technique is determined not only by the model structure (say, a polynomial trend versus a Fourier series) but, more importantly, by the relative values of the parameters within the model. This makes comparison between model-order selection algorithms difficult, as within the same model with a given order one could find an example for which one of the methods performs favourably or fails [6, 8]. Our aim is to improve the performance of the model-order selection criteria in cases where the SNR is low by considering a model-selection searching procedure that takes into account not only the full-model order search but also a partial-model order search within the given model order. Understandably, the improvement in the performance of the model-order estimation comes at the expense of additional computational complexity.
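
To make the difference between the two searches concrete, here is a minimal sketch assuming least-squares fits and Gaussian AIC as the criterion (the paper also covers other information-theoretic and bootstrap-based criteria): the traditional search scores one nested model per order, while the proposed search scores the best subset of regressors within each order.

```python
# Minimal sketch: full-model vs partial-model order selection with Gaussian AIC.
# X holds the candidate regressors as columns; y is the observed signal.
import itertools
import numpy as np

def gaussian_aic(rss, n, k):
    """AIC for a linear-Gaussian model: n*log(RSS/n) + 2k."""
    return n * np.log(rss / n) + 2 * k

def rss_of(X, y, cols):
    """Residual sum of squares of the least-squares fit using the given columns."""
    beta, resid, *_ = np.linalg.lstsq(X[:, cols], y, rcond=None)
    return float(resid[0]) if resid.size else float(np.sum((y - X[:, cols] @ beta) ** 2))

def full_model_order(X, y, max_order):
    """Traditional search: order p uses the first p regressors (nested models)."""
    n = len(y)
    scores = [gaussian_aic(rss_of(X, y, list(range(p))), n, p)
              for p in range(1, max_order + 1)]
    return int(np.argmin(scores)) + 1

def partial_model_order(X, y, max_order):
    """Proposed search: order p uses the best-scoring subset of p regressors."""
    n = len(y)
    best_per_order = []
    for p in range(1, max_order + 1):
        best_per_order.append(min(
            gaussian_aic(rss_of(X, y, list(c)), n, p)
            for c in itertools.combinations(range(X.shape[1]), p)))
    return int(np.argmin(best_per_order)) + 1
```

Both functions return an estimated order; the partial search is combinatorially more expensive in the number of candidate regressors, which matches the remark above about additional computational complexity.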

Relevance:

10.00%

Publisher:

Abstract:

Sustainability Declarations were introduced by the Queensland State Government on 1 January 2010 as a compulsory measure for all dwelling sales. The purpose of this policy decision was to improve the relevance of sustainability in the home ownership decision-making process. This paper assesses the initial impact of this initiative over its first year in operation. In partnership with the Real Estate Institute of Queensland, real estate agents and salespeople in Queensland were surveyed to determine what impact the Sustainability Declaration has had on home buyer decision making. The level of compliance by the real estate industry was also reviewed. These preliminary findings indicate a high level of compliance from the real estate industry; however, the results confirm that sustainability is yet to become a criterion of relevance to the majority of home buyers in Queensland. The Sustainability Declarations are a first step in raising home owners' awareness of the importance of sustainability in housing. Further monitoring of this impact will be carried out over time.

Relevance:

10.00%

Publisher:

Abstract:

The launch of the Apple iPad in January 2010 has seen considerable interest from the newspaper and publishing industry in developing content and business models for the tablet PC device that can address the limits of both the print and online news and information media products. It is early days in the iPad’s evolution, and we wait to see what competitor devices will emerge in the near future. It is apparent, however, that it has become a significant “niche” product, with considerable potential for mass market expansion over the next few years, possibly at the expense of netbook sales. The scope for the iPad and tablet PCs to become a “fourth screen” for users, alongside the TV, PC and mobile phone, is in early stages of evolution. The study used five criteria to assess iPad apps:

• Content: timeliness; archive; personalisation; content depth; advertisements; the use of multimedia; and the extent to which the content was in sync with the provider brand.
• Useability: degree of static content; ability to control multimedia; file size; page clutter; resolution; signposts; and customisation.
• Interactivity: hyperlinks; ability to contribute content or provide feedback to news items; depth of multimedia; search function; ability to use plug-ins and linking; ability to highlight, rate and/or save items; functions that may facilitate a community of users.
• Transaction capabilities: ecommerce functionality; purchase and download process; user privacy and transaction security.
• Openness: degree of linking to outside sources; reader contribution processes; anonymity measures; and application code ownership.

Relevance:

10.00%

Publisher:

Abstract:

In many product categories of durable goods such as TV, PC, and DVD player, the largest component of sales is generated by consumers replacing existing units. Aggregate sales models proposed by diffusion of innovation researchers for the replacement component of sales have incorporated several different replacement distributions such as Rayleigh, Weibull, Truncated Normal and Gamma. Although these alternative replacement distributions have been tested using both time series sales data and individual-level actuarial “life-tables” of replacement ages, there is no consensus on which distributions are more appropriate to model replacement behaviour. In the current study we are motivated to develop a new “modified gamma” distribution for two reasons. First, we recognise that replacements have two fundamentally different drivers – those forced by failure and early, discretionary replacements. The replacement distribution for each of these drivers is expected to be quite different. Second, we observed a poor fit of other distributions to our empirical data. We conducted a survey of 8,077 households to empirically examine models of replacement sales for six electronic consumer durables – TVs, VCRs, DVD players, digital cameras, personal and notebook computers. This data allows us to construct individual-level “life-tables” for replacement ages. We demonstrate that the new modified gamma model fits the empirical data better than existing models for all six products, using both a primary and a hold-out sample.
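
For context, the sketch below fits a standard gamma (and, for comparison, a Weibull) to synthetic replacement ages by maximum likelihood. It illustrates the kind of distribution fitting compared in the study but does not reproduce the authors' modified gamma model or their survey data.

```python
# Illustrative only: fitting standard gamma and Weibull distributions to
# synthetic replacement ages; the paper's "modified gamma" is not reproduced.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
ages = rng.gamma(shape=3.0, scale=2.0, size=500)  # synthetic replacement ages (years)

shape, loc, scale = stats.gamma.fit(ages, floc=0)  # location fixed at zero
print(f"fitted gamma: shape={shape:.2f}, scale={scale:.2f}")

c, loc_w, scale_w = stats.weibull_min.fit(ages, floc=0)
print("gamma log-likelihood:  ", np.sum(stats.gamma.logpdf(ages, shape, loc, scale)))
print("weibull log-likelihood:", np.sum(stats.weibull_min.logpdf(ages, c, loc_w, scale_w)))
```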

Relevance:

10.00%

Publisher:

Abstract:

Plant biosecurity requires statistical tools to interpret field surveillance data in order to manage pest incursions that threaten crop production and trade. Ultimately, management decisions need to be based on the probability that an area is infested or free of a pest. Current informal approaches to delimiting pest extent rely upon expert ecological interpretation of presence / absence data over space and time. Hierarchical Bayesian models provide a cohesive statistical framework that can formally integrate the available information on both pest ecology and data. The overarching method involves constructing an observation model for the surveillance data, conditional on the hidden extent of the pest and uncertain detection sensitivity. The extent of the pest is then modelled as a dynamic invasion process that includes uncertainty in ecological parameters. Modelling approaches to assimilate this information are explored through case studies on spiralling whitefly, Aleurodicus dispersus and red banded mango caterpillar, Deanolis sublimbalis. Markov chain Monte Carlo simulation is used to estimate the probable extent of pests, given the observation and process model conditioned by surveillance data.

Statistical methods, based on time-to-event models, are developed to apply hierarchical Bayesian models to early detection programs and to demonstrate area freedom from pests. The value of early detection surveillance programs is demonstrated through an application to interpret surveillance data for exotic plant pests with uncertain spread rates. The model suggests that typical early detection programs provide a moderate reduction in the probability of an area being infested but a dramatic reduction in the expected area of incursions at a given time.

Estimates of spiralling whitefly extent are examined at local, district and state-wide scales. The local model estimates the rate of natural spread and the influence of host architecture, host suitability and inspector efficiency. These parameter estimates can support the development of robust surveillance programs. Hierarchical Bayesian models for the human-mediated spread of spiralling whitefly are developed for the colonisation of discrete cells connected by a modified gravity model. By estimating dispersal parameters, the model can be used to predict the extent of the pest over time. An extended model predicts the climate restricted distribution of the pest in Queensland. These novel human-mediated movement models are well suited to demonstrating area freedom at coarse spatio-temporal scales.

At finer scales, and in the presence of ecological complexity, exploratory models are developed to investigate the capacity for surveillance information to estimate the extent of red banded mango caterpillar. It is apparent that excessive uncertainty about observation and ecological parameters can impose limits on inference at the scales required for effective management of response programs. The thesis contributes novel statistical approaches to estimating the extent of pests and develops applications to assist decision-making across a range of plant biosecurity surveillance activities. Hierarchical Bayesian modelling is demonstrated as both a useful analytical tool for estimating pest extent and a natural investigative paradigm for developing and focussing biosecurity programs.
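
As a much-simplified illustration of the observation-model idea (conditional on hidden infestation status and uncertain detection sensitivity), the sketch below computes the posterior probability of area freedom after a series of negative surveys. The prior, the sensitivity distribution and the survey count are all assumed for the example and are not taken from the thesis.

```python
# Illustrative sketch (not the thesis's full hierarchical model): posterior
# probability that an area is pest free after n negative surveys, with an
# assumed prior and an uncertain per-survey detection sensitivity.
import numpy as np

rng = np.random.default_rng(2)

prior_infested = 0.2                          # assumed prior probability of infestation
sensitivity = rng.beta(8, 2, size=10_000)     # uncertain per-survey detection sensitivity
n_negative_surveys = 5

# P(all surveys negative | infested) under each sampled sensitivity
p_neg_given_infested = (1 - sensitivity) ** n_negative_surveys

# Bayes' rule, averaged over sensitivity uncertainty; false positives are ignored,
# so P(all negative | not infested) = 1.
posterior_infested = (prior_infested * p_neg_given_infested) / (
    prior_infested * p_neg_given_infested + (1 - prior_infested) * 1.0
)
print("posterior P(infested): ", posterior_infested.mean())
print("posterior P(area free):", 1 - posterior_infested.mean())
```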

Relevance:

10.00%

Publisher:

Abstract:

With the emergence of multi-core processors into the mainstream, parallel programming is no longer the specialized domain it once was. There is a growing need for systems to allow programmers to more easily reason about data dependencies and inherent parallelism in general purpose programs. Many of these programs are written in popular imperative programming languages like Java and C#. In this thesis I present a system for reasoning about side-effects of evaluation in an abstract and composable manner that is suitable for use by both programmers and automated tools such as compilers. The goal of developing such a system is both to facilitate the automatic exploitation of the inherent parallelism present in imperative programs and to allow programmers to reason about dependencies which may be limiting the parallelism available for exploitation in their applications. Previous work on languages and type systems for parallel computing has tended to focus on providing the programmer with tools to facilitate the manual parallelization of programs; programmers must decide when and where it is safe to employ parallelism without the assistance of the compiler or other automated tools. None of the existing systems combine abstraction and composition with parallelization and correctness checking to produce a framework which helps both programmers and automated tools to reason about inherent parallelism. In this work I present a system for abstractly reasoning about side-effects and data dependencies in modern, imperative, object-oriented languages using a type and effect system based on ideas from Ownership Types. I have developed sufficient conditions for the safe, automated detection and exploitation of a number of task, data and loop parallelism patterns in terms of ownership relationships. To validate my work, I have applied my ideas to the C# version 3.0 language to produce a language extension called Zal. I have implemented a compiler for the Zal language as an extension of the GPC# research compiler as a proof of concept of my system. I have used it to parallelize a number of real-world applications to demonstrate the feasibility of my proposed approach. In addition to this empirical validation, I present an argument for the correctness of the type system and language semantics I have proposed, as well as sketches of proofs for the correctness of the proposed sufficient conditions for parallelization.

Relevance:

10.00%

Publisher:

Abstract:

Traffic oscillations are typical features of congested traffic flow that are characterized by recurring decelerations followed by accelerations (stop-and-go driving). The negative environmental impacts of these oscillations are widely accepted, but their impact on traffic safety has been debated. This paper describes the impact of freeway traffic oscillations on traffic safety. This study employs a matched case-control design using high-resolution traffic and crash data from a freeway segment. Traffic conditions prior to each crash were taken as cases, while traffic conditions during the same periods on days without crashes were taken as controls. These were also matched by presence of congestion, geometry and weather. A total of 82 cases and about 80,000 candidate controls were extracted from more than three years of data from 2004 to 2007. Conditional logistic regression models were developed based on the case-control samples. To verify consistency in the results, 20 different sets of controls were randomly extracted from the candidate pool for varying control-case ratios. The results reveal that the standard deviation of speed (thus, oscillations) is a significant variable, with an average odds ratio of about 1.08. This implies that the likelihood of a (rear-end) crash increases by about 8% with an additional unit increase in the standard deviation of speed. The average traffic states prior to crashes were less significant than the speed variations in congestion.
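
The odds-ratio interpretation in the last sentences can be unpacked with a line of arithmetic (illustrative; the unit of the speed standard deviation is whatever the study's detector data report):

```latex
% With \hat{\beta} the fitted conditional-logistic coefficient on the
% standard deviation of speed \sigma_v:
\mathrm{OR} = e^{\hat{\beta}} \approx 1.08
\quad\Longrightarrow\quad
\frac{\mathrm{odds}(\text{crash} \mid \sigma_v + k)}{\mathrm{odds}(\text{crash} \mid \sigma_v)}
  = e^{k\hat{\beta}} \approx 1.08^{k}
```

So one additional unit of speed variation raises the crash odds by roughly 8%, and, for example, three additional units raise them by a factor of about $1.08^3 \approx 1.26$.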

Relevance:

10.00%

Publisher:

Abstract:

The decision of the High Court in Butcher v Lachlan Elder Realty Pty Ltd [2004] HCA 60 involves issues that affect every person who is induced to buy real estate in Australia by statements in sales brochures distributed by real estate agents. One of these issues is the extent to which estate agents unwittingly engage in misleading or deceptive conduct under s 52 of the Trade Practices Act 1974 (Cth) (‘the Act’) when they distribute sales brochures that contain untrue or misleading statements prepared by others. A further issue is the extent to which agents can escape liability by relying on disclaimers about the authenticity of false statements contained in brochures prepared by them.

Relevance:

10.00%

Publisher:

Abstract:

A pervasive and puzzling feature of banks’ Value-at-Risk (VaR) is its abnormally high level, which leads to excessive regulatory capital. A possible explanation for the tendency of commercial banks to overstate their VaR is that they incompletely account for the diversification effect among broad risk categories (e.g., equity, interest rate, commodity, credit spread, and foreign exchange). By underestimating the diversification effect, banks’ proprietary VaR models produce overly prudent market risk assessments. In this paper, we examine empirically the validity of this hypothesis using actual VaR data from major US commercial banks. In contrast to the VaR diversification hypothesis, we find that US banks show no sign of systematic underestimation of the diversification effect. In particular, the diversification effects used by banks are very close to (and quite often larger than) our empirical diversification estimates. A direct implication of this finding is that individual VaRs for each broad risk category, just like aggregate VaRs, are biased risk assessments.
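
The diversification effect at issue is simply the gap between the sum of standalone VaRs and the aggregate VaR. The sketch below computes it on synthetic P&L for the five broad risk categories named above (hypothetical figures, historical-simulation 99% VaR), not on the banks' actual data.

```python
# Illustrative sketch of the diversification effect on synthetic P&L.
import numpy as np

rng = np.random.default_rng(3)
# Hypothetical daily P&L for five broad risk categories (equity, interest rate,
# commodity, credit spread, FX), with moderate cross-correlation.
cov = 0.4 * np.ones((5, 5)) + 0.6 * np.eye(5)
pnl = rng.multivariate_normal(mean=np.zeros(5), cov=cov, size=2_000)

def var99(x):
    """Historical-simulation 99% VaR, reported as a positive loss number."""
    return -np.quantile(x, 0.01)

standalone = np.array([var99(pnl[:, i]) for i in range(5)])
aggregate = var99(pnl.sum(axis=1))

diversification_effect = standalone.sum() - aggregate
print("sum of standalone VaRs:", standalone.sum())
print("aggregate VaR:         ", aggregate)
print("diversification effect:", diversification_effect,
      f"({diversification_effect / standalone.sum():.0%} of the undiversified total)")
```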

Relevance:

10.00%

Publisher:

Abstract:

Despite considerable success in the treatment of early stage localized prostate cancer (PC), the acute inadequacy of late stage PC treatment and its inherent heterogeneity pose a formidable challenge. Clearly, an improved understanding of PC genesis and progression, along with the development of new targeted therapies, is warranted. Animal models, especially transgenic immunocompetent mouse models, have proven to be the best ally in this respect. A series of models have been developed by modulation of expression of genes implicated in cancer genesis and progression; mainly, modulation of expression of oncogenes, steroid hormone receptors, growth factors and their receptors, cell cycle and apoptosis regulators, and tumor suppressor genes has been used. Such models have contributed significantly to our understanding of the molecular and pathological aspects of PC initiation and progression. In particular, transgenic mouse models based on multiple genetic alterations can more accurately address the inherent complexity of PC, not only in revealing the mechanisms of tumorigenesis and progression but also for clinically relevant evaluation of new therapies. Further, with advances in conditional knockout technologies, otherwise embryonically lethal gene changes can be incorporated, leading to the development of new-generation transgenics, thus adding significantly to our existing knowledge base. Different models and their relevance to PC research are discussed.

Relevance:

10.00%

Publisher:

Abstract:

We consider the problem of binary classification where the classifier can, for a particular cost, choose not to classify an observation. Just as in the conventional classification problem, minimization of the sample average of the cost is a difficult optimization problem. As an alternative, we propose the optimization of a certain convex loss function φ, analogous to the hinge loss used in support vector machines (SVMs). Its convexity ensures that the sample average of this surrogate loss can be efficiently minimized. We study its statistical properties. We show that minimizing the expected surrogate loss—the φ-risk—also minimizes the risk. We also study the rate at which the φ-risk approaches its minimum value. We show that fast rates are possible when the conditional probability P(Y=1|X) is unlikely to be close to certain critical values.
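
To make the abstention setup concrete, the block below states a standard loss for classification with a reject option and the associated Bayes rule; the notation is generic to this literature, and the critical values d and 1 - d are the kind referred to in the last sentence (the paper's exact surrogate φ is not reproduced here).

```latex
% Illustrative 0--d--1 loss for binary classification with a reject option:
\ell_d(a, y) =
\begin{cases}
  0, & a = y \quad (\text{correct classification}),\\
  d, & a = \textsf{reject},\\
  1, & a \neq y \quad (\text{misclassification}),
\end{cases}
\qquad 0 < d \le \tfrac{1}{2}.
% With \eta(x) = P(Y = 1 \mid X = x), the Bayes-optimal rule predicts +1 when
% \eta(x) \ge 1 - d, predicts -1 when \eta(x) \le d, and rejects otherwise,
% so fast rates hinge on \eta(x) rarely being close to the critical values d and 1 - d.
```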