912 results for Bias-Variance Trade-off
Abstract:
Principal Topic: In this study we investigate how strategic orientation moderates the impact of growth on profitability for a sample of Danish high-growth (Gazelle) firms. ---------- Firm growth has been an essential part of both management research and entrepreneurship research for decades (e.g. Penrose 1959, Birch 1987, Storey 1994). From a societal point of view, firm growth has been perceived as an economic generator and job creator. In entrepreneurship research, growth has been an important part of the field (Davidsson, Delmar and Wiklund 2006), and many have used growth as a measure of success. In strategic management, growth has been seen as an approach to achieving competitive advantages and a way of becoming increasingly profitable (e.g. Russo and Fouts 1997, Cho and Pucic 2005). However, although firm growth used to be perceived as a natural pathway to profitability, more skepticism has emerged recently due to both new theoretical developments and new empirical insights. Empirically, studies show inconsistent and inconclusive evidence regarding the impact of growth on profitability. Our review reveals that some studies find a substantial positive relationship, some find a weak positive relationship, some find no relationship, and some find a negative relationship. Overall, two dominant yet divergent theoretical positions can be identified. The first position, mainly focusing on environmental fit, argues that firms are likely to become more profitable if they enter a market quickly and on a larger scale, due to first-mover advantages and economies of scale. The second position, mainly focusing on internal fit, argues that growth may lead to a range of internal challenges and difficulties, including rapid change in structure, reward systems, decision making, communication and management style. 
The inconsistent empirical results, together with the two divergent theoretical positions, call for further investigation into the circumstances under which growth does and does not generate profitability. In this project, we investigate how strategic orientations influence the impact of growth on profitability by asking the following research question: How is the impact of growth on profitability moderated by strategic orientation? Based on a literature review of how growth impacts profitability in areas such as entrepreneurship, strategic management and strategic entrepreneurship, we develop three hypotheses regarding the growth-profitability relationship and strategic orientation as a potential moderator. ---------- Methodology/Key Propositions: The three hypotheses are tested on data collected in 2008. All firms in Denmark, including all listed and non-listed (VAT-registered) firms, that experienced 100% growth and had positive sales or gross profit over a four-year period (2004-2007) were surveyed. In total, 2,475 firms fulfilled the requirements. Of those, 1,107 firms returned usable questionnaires, giving a response rate of 45%. The financial data, together with data on the number of employees, were obtained from D&B (previously Dun & Bradstreet). The remaining data were obtained through the survey. Hierarchical regression models with ROA (return on assets) as the dependent variable were used to test the hypotheses. In the first model, control variables including region, industry, firm age, CEO age, CEO gender, CEO education and number of employees were entered. In the second model, growth, measured as growth in employees, was entered. Strategic orientation (differentiation, cost leadership, focus differentiation and focus cost leadership) was entered next, followed by the interaction effects of strategic orientation and growth. 
---------- Results and Implications: The results show a positive impact of firm growth on profitability, and further that this impact is moderated by strategic orientation. Specifically, growth was found to have a larger impact on profitability when firms do not pursue a focus strategy, whether focus differentiation or focus cost leadership. Our preliminary interpretation of the results suggests that the value of growth depends on the circumstances, and more specifically on 'how much is left to fight for'. Firms that target a narrow segment seem less likely to gain value from growth: the remaining market share for these firms to fight for is not large enough to compensate for the cost of growing. Based on our findings, growth therefore appears to have a more positive relationship with profitability for firms that approach a broad market segment. Furthermore, we argue that firms pursuing a focus strategy will have more specialized assets, which decreases the possibility of further profitable expansion. For firms, CEOs, boards of directors and so on, the study shows that high growth is not necessarily something worth aiming for. There is a trade-off between the cost of growing and the value of growing, and for many firms there might be better ways of generating profitability in the long run, depending on the strategic orientation of the firm. For advisors and consultants, the conditional value of growth implies that in-depth knowledge of their clients' situation is necessary before any advice can be given. Finally, for policy makers, it means they have to be careful when initiating new policies to promote firm growth, taking firm strategy and industry conditions into consideration.
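The moderation test described above can be sketched as a regression with an interaction term between growth and strategy. A minimal pure-Python illustration, fitting ordinary least squares via the normal equations; the data, variable names and coefficients are invented for the sketch, not taken from the study:

```python
def fit_ols(X, y):
    """Ordinary least squares via the normal equations (X'X)b = X'y."""
    k = len(X[0])
    XtX = [[sum(r[i] * r[j] for r in X) for j in range(k)] for i in range(k)]
    Xty = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(k)]
    # Gaussian elimination with partial pivoting.
    for col in range(k):
        piv = max(range(col, k), key=lambda r: abs(XtX[r][col]))
        XtX[col], XtX[piv] = XtX[piv], XtX[col]
        Xty[col], Xty[piv] = Xty[piv], Xty[col]
        for r in range(col + 1, k):
            f = XtX[r][col] / XtX[col][col]
            for c in range(col, k):
                XtX[r][c] -= f * XtX[col][c]
            Xty[r] -= f * Xty[col]
    # Back substitution.
    beta = [0.0] * k
    for r in range(k - 1, -1, -1):
        beta[r] = (Xty[r] - sum(XtX[r][c] * beta[c]
                                for c in range(r + 1, k))) / XtX[r][r]
    return beta

# Synthetic firms: profitability = 2 + 1.5*growth - 0.5*focus - 1.0*growth*focus
rows, y = [], []
for growth in range(4):
    for focus in (0, 1):
        rows.append([1.0, growth, focus, growth * focus])  # interaction column
        y.append(2 + 1.5 * growth - 0.5 * focus - 1.0 * growth * focus)

b0, b_growth, b_focus, b_interact = fit_ols(rows, y)
# A negative interaction term means growth pays off less under a focus strategy.
```

The hierarchical aspect of the study's models (entering controls, then growth, then the interactions) is omitted here; the sketch only shows how a moderation effect appears as an interaction coefficient.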
Abstract:
Information fusion in biometrics has received considerable attention. The architecture proposed here is based on the sequential integration of multi-instance and multi-sample fusion schemes. This method is analytically shown to improve performance and to allow a controlled trade-off between false alarms and false rejects when the classifier decisions are statistically independent. Equations developed for the detection error rates are experimentally evaluated by applying the proposed architecture to text-dependent speaker verification using HMM-based digit-dependent speaker models. The tuning of the parameters, n classifiers and m attempts/samples, is investigated and the resultant detection error trade-off performance is evaluated on individual digits. Results show that performance improvement can be achieved even for weaker classifiers (FRR 19.6%, FAR 16.7%). The architectures investigated apply to speaker verification from spoken digit strings, such as credit card numbers, in telephone, VoIP or internet-based applications.
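Under the statistical-independence assumption stated in the abstract, the error-rate algebra of such fusion schemes can be sketched directly. A toy illustration; the function names and the particular OR-then-AND arrangement are my assumptions for the sketch, not necessarily the paper's exact architecture:

```python
def and_fusion(far, frr, n):
    # AND rule over n independent classifiers: accept only if all n accept,
    # so a false accept requires all n to falsely accept.
    return far ** n, 1 - (1 - frr) ** n

def or_fusion(far, frr, m):
    # OR rule over m independent attempts/samples: accept if any is accepted.
    return 1 - (1 - far) ** m, frr ** m

def combined(far, frr, n, m):
    # Multi-sample stage (OR over m attempts) feeding a
    # multi-instance stage (AND over n classifiers).
    f_a, f_r = or_fusion(far, frr, m)
    return and_fusion(f_a, f_r, n)

# Weak-classifier rates quoted in the abstract: FRR 19.6%, FAR 16.7%.
far2, frr2 = combined(0.167, 0.196, n=2, m=2)
```

Tuning n and m moves the operating point in opposite directions (AND suppresses false accepts, OR suppresses false rejects), which is the controlled trade-off the abstract refers to.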
Abstract:
Generative music algorithms frequently operate by making musical decisions in a sequence, with each step of the sequence incorporating the local musical context in the decision process. The context is generally a short window of past musical actions. What is not generally included in the context is future actions. For real-time systems this is because the future is unknown. Offline systems also frequently utilise causal algorithms either for reasons of efficiency [1] or to simulate perceptual constraints [2]. However, even real-time agents can incorporate knowledge of their own future actions by utilising some form of planning. We argue that for rhythmic generation the incorporation of a limited form of planning - anticipatory timing - offers a worthwhile trade-off between musical salience and efficiency. We give an example of a real-time generative agent - the Jambot - that utilises anticipatory timing for rhythmic generation. We describe its operation, and compare its output with and without anticipatory timing.
Abstract:
The mechanisms of helicopter flight create a unique, high-vibration environment which can play havoc with the accurate operation of on-board sensors. Vibration isolation of electronic sensors from structure-borne oscillations is paramount to their reliable and accurate use. Effective isolation is achieved by realising a trade-off between the properties of the suspended instrument package and the isolation mechanism. This is made more difficult as the weight and size of sensors and computing hardware decrease with advances in technology. This paper presents a history of the design, challenges, constraints and construction of an integrated isolated vision and sensor platform and landing gear for the CSIRO autonomous X-Cell helicopter. The results of isolation performance and in-flight tests of the platform in autonomous flight are presented.
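The isolation trade-off described above follows the classic single-degree-of-freedom transmissibility curve: isolation only begins above sqrt(2) times the mount's natural frequency, and a lighter payload pushes that natural frequency up. A sketch using the standard textbook formula, not this paper's specific mount design:

```python
import math

def transmissibility(r, zeta):
    """Magnitude of vibration transmitted through a linear isolator.

    r: ratio of excitation frequency to the mount's natural frequency.
    zeta: damping ratio of the mount.
    """
    num = 1 + (2 * zeta * r) ** 2
    den = (1 - r ** 2) ** 2 + (2 * zeta * r) ** 2
    return math.sqrt(num / den)

# Below r = sqrt(2) the mount amplifies vibration; well above it, it isolates.
low = transmissibility(0.5, 0.1)   # amplification region
high = transmissibility(3.0, 0.1)  # isolation region
```

The trade-off for small sensor packages: a softer mount lowers the natural frequency (larger r at a given rotor frequency, better isolation) but increases static deflection and sway of the suspended package.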
Abstract:
As Web searching becomes more prolific for information access worldwide, we need to better understand users’ Web searching behaviour and develop better models of their interaction with Web search systems. Web search modelling is a significant and important area of Web research. Searching on the Web is an integral element of information behaviour and human–computer interaction. Web searching includes multitasking processes, the allocation of cognitive resources among several tasks, and shifts in cognitive, problem and knowledge states. In addition to multitasking, cognitive coordination and cognitive shifts are also important, but are under-explored aspects of Web searching. During the Web searching process, beyond physical actions, users experience various cognitive activities. Interactive Web searching involves many users’ cognitive shifts at different information behaviour levels. Cognitive coordination allows users to trade off the dependencies among multiple information tasks and the resources available. Much research has been conducted into Web searching. However, few studies have modelled the nature of and relationship between multitasking, cognitive coordination and cognitive shifts in the Web search context. Modelling how Web users interact with Web search systems is vital for the development of more effective Web IR systems. This study aims to model the relationship between multitasking, cognitive coordination and cognitive shifts during Web searching. A preliminary theoretical model is presented based on previous studies. The research is designed to validate the preliminary model. Forty-two study participants were involved in the empirical study. A combination of data collection instruments, including pre- and post-questionnaires, think-aloud protocols, search logs, observations and interviews, was employed to obtain comprehensive user data during Web search interactions. 
Based on the grounded theory approach, qualitative analysis methods including content analysis and verbal protocol analysis were used to analyse the data. The findings were inferred through an analysis of questionnaires, a transcription of think-aloud protocols, the Web search logs, and notes on observations and interviews. Five key findings emerged. (1) Multitasking during Web searching was demonstrated as a two-dimensional behaviour. The first dimension was represented as multiple information problems searching by task switching. Users’ Web searching behaviour was a process of switching among multiple tasks, that is, from searching on one information problem to searching on another. The second dimension of multitasking behaviour was represented as an information problem searching within multiple Web search sessions. Users usually conducted Web searching on a complex information problem by submitting multiple queries, using several Web search systems and opening multiple windows/tabs. (2) Cognitive shifts were the brain’s internal response to external stimuli. Cognitive shifts were found to be an essential element of searching interactions and users’ Web searching behaviour. The study revealed two kinds of cognitive shifts. The first kind, the holistic shift, included users’ perception of the information problem and overall information evaluation before and after Web searching. The second kind, the state shift, reflected users’ changes in focus between the different cognitive states during the course of Web searching. Cognitive states included users’ focus on the states of topic, strategy, evaluation, view and overview. (3) Three levels of cognitive coordination behaviour were identified: the information task coordination level, the coordination mechanism level, and the strategy coordination level. The three levels of cognitive coordination behaviour interplayed to support multiple information tasks switching. 
(4) An important relationship existed between multitasking, cognitive coordination and cognitive shifts during Web searching. Cognitive coordination as a management mechanism bound together other cognitive processes, including multitasking and cognitive shifts, in order to move through users’ Web searching process. (5) Web search interaction was shown to be a multitasking process which included information problems ordering, task switching and task and mental coordinating; also, at a deeper level, cognitive shifts took place. Cognitive coordination was the hinge behaviour linking multitasking and cognitive shifts. Without cognitive coordination, neither multitasking Web searching behaviour nor the complicated mental process of cognitive shifting could occur. The preliminary model was revisited with these empirical findings. A revised theoretical model (MCC Model) was built to illustrate the relationship between multitasking, cognitive coordination and cognitive shifts during Web searching. Implications and limitations of the study are also discussed, along with future research work.
Abstract:
Physical infrastructure assets are important components of our society and our economy. They are usually designed to last for many years, are expected to be heavily used during their lifetime, carry considerable load, and are exposed to the natural environment. They are also normally major structures, and therefore represent a heavy investment, requiring constant management over their life cycle to ensure that they perform as required by their owners and users. Given a complex and varied infrastructure life cycle, constraints on available resources, and continuing requirements for effectiveness and efficiency, good management of infrastructure is important. While there is often no one best management approach, the choice of options is improved by better identification and analysis of the issues, by the ability to prioritise objectives, and by a scientific approach to the analysis process. The abilities to better understand the effect of inputs in the infrastructure life cycle on results, to minimise uncertainty, and to better evaluate the effect of decisions in a complex environment, are important in allocating scarce resources and making sound decisions. Through the development of an infrastructure management modelling and analysis methodology, this thesis provides a process that assists the infrastructure manager in the analysis, prioritisation and decision making process. This is achieved through the use of practical, relatively simple tools, integrated in a modular flexible framework that aims to provide an understanding of the interactions and issues in the infrastructure management process. The methodology uses a combination of flowcharting and analysis techniques. It first charts the infrastructure management process and its underlying infrastructure life cycle through the time interaction diagram, a graphical flowcharting methodology that is an extension of methodologies for modelling data flows in information systems. 
This process divides the infrastructure management process over time into self-contained modules that are based on a particular set of activities, with the information flows between them defined by their interfaces and relationships. The modular approach also permits more detailed analysis, or aggregation, as the case may be. It also forms the basis of extending the infrastructure modelling and analysis process to infrastructure networks, through using individual infrastructure assets and their related projects as the basis of the network analysis process. It is recognised that the infrastructure manager is required to meet, and balance, a number of different objectives, and therefore a number of high-level outcome goals for the infrastructure management process have been developed, based on common purpose or measurement scales. These goals form the basis of classifying the larger set of multiple objectives for analysis purposes. A two-stage approach that rationalises then weights objectives, using a paired comparison process, ensures that the objectives to be met are both kept to the minimum number required and fairly weighted. Qualitative variables are incorporated into the weighting and scoring process, with utility functions proposed where there is risk or a trade-off situation applies. Variability is considered important in the infrastructure life cycle, the approach used being based on analytical principles but incorporating randomness in variables where required. The modular design of the process permits alternative processes to be used within particular modules, if this is considered a more appropriate way of analysis, provided boundary conditions and requirements for linkages to other modules are met. 
Development and use of the methodology has highlighted a number of infrastructure life cycle issues, including data and information aspects, and consequences of change over the life cycle, as well as variability and the other matters discussed above. It has also highlighted the requirement to use judgment where required, and for organisations that own and manage infrastructure to retain intellectual knowledge regarding that infrastructure. It is considered that the methodology discussed in this thesis, which to the author's knowledge has not been developed elsewhere, may be used for the analysis of alternatives, planning, prioritisation of a number of projects, and identification of the principal issues in the infrastructure life cycle.
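The paired-comparison weighting stage mentioned above can be sketched in a few lines. The objectives and judgments below are hypothetical, and simple win-counting is only one common variant of the technique (some schemes add one to each count so that no objective receives a zero weight):

```python
objectives = ["safety", "cost", "service_life"]

# Hypothetical pairwise judgments: for each pair, which objective matters more.
judgments = {
    ("safety", "cost"): "safety",
    ("safety", "service_life"): "safety",
    ("cost", "service_life"): "service_life",
}

# Count how often each objective is preferred, then normalise to weights.
wins = {o: 0 for o in objectives}
for winner in judgments.values():
    wins[winner] += 1

total = len(judgments)
weights = {o: wins[o] / total for o in objectives}
# "safety" carries the most weight; "cost", never preferred, drops to zero here.
```

With n objectives the method needs n*(n-1)/2 judgments, which is why the thesis first rationalises the objective set down to the minimum number required.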
Abstract:
This thesis deals with the problem of instantaneous frequency (IF) estimation of sinusoidal signals. This topic plays a significant role in signal processing and communications. Depending on the type of signal, two major approaches are considered. For IF estimation of single-tone or digitally-modulated sinusoidal signals (like frequency shift keying signals) the approach of digital phase-locked loops (DPLLs) is considered, and this is Part-I of this thesis. For FM signals the approach of time-frequency analysis is considered, and this is Part-II of the thesis. In Part-I we have utilized sinusoidal DPLLs with a non-uniform sampling scheme, as this type is widely used in communication systems. The digital tanlock loop (DTL) introduced significant advantages over other existing DPLLs. In the last 10 years many efforts have been made to improve DTL performance. However, this loop and all of its modifications utilize a Hilbert transformer (HT) to produce a signal-independent 90-degree phase-shifted version of the input signal. The Hilbert transformer can be realized approximately using a finite impulse response (FIR) digital filter. This realization introduces further complexity in the loop, in addition to approximations and frequency limitations on the input signal. We have tried to avoid the practical difficulties associated with the conventional tanlock scheme while keeping its advantages. A time-delay is utilized in the tanlock scheme of the DTL to produce a signal-dependent phase shift. This gave rise to the time-delay digital tanlock loop (TDTL). Fixed point theorems are used to analyze the behavior of the new loop. As such, the TDTL combines the two major approaches in DPLLs: the non-linear approach of the sinusoidal DPLL based on fixed point analysis, and the linear tanlock approach based on arctan phase detection. The TDTL preserves the main advantages of the DTL despite its reduced structure. An application of the TDTL in FSK demodulation is also considered. 
This idea of replacing the HT by a time-delay may be of interest in other signal processing systems. Hence we have analyzed and compared the behaviors of the HT and the time-delay in the presence of additive Gaussian noise. Based on the above analysis, the behavior of the first and second-order TDTLs has been analyzed in additive Gaussian noise. Since DPLLs need time for locking, they are normally not efficient in tracking the continuously changing frequencies of non-stationary signals, i.e. signals with time-varying spectra. Non-stationary signals are of importance in synthetic and real-life applications. An example is the frequency-modulated (FM) signals widely used in communication systems. Part-II of this thesis is dedicated to the IF estimation of non-stationary signals. For such signals the classical spectral techniques break down, due to the time-varying nature of their spectra, and more advanced techniques should be utilized. For the purpose of instantaneous frequency estimation of non-stationary signals there are two major approaches: parametric and non-parametric. We chose the non-parametric approach, which is based on time-frequency analysis. This approach is computationally less expensive and more effective in dealing with multicomponent signals, which are the main aim of this part of the thesis. A time-frequency distribution (TFD) of a signal is a two-dimensional transformation of the signal to the time-frequency domain. Multicomponent signals can be identified by multiple energy peaks in the time-frequency domain. Many real-life and synthetic signals are of a multicomponent nature and there is little in the literature concerning IF estimation of such signals. This is why we have concentrated on multicomponent signals in Part-II. An adaptive algorithm for IF estimation using the quadratic time-frequency distributions has been analyzed. A class of time-frequency distributions that are more suitable for this purpose has been proposed. 
The kernels of this class are time-only or one-dimensional, rather than the time-lag (two-dimensional) kernels. Hence this class has been named the T-class. If the parameters of these TFDs are properly chosen, they are more efficient than the existing fixed-kernel TFDs in terms of resolution (energy concentration around the IF) and artifact reduction. The T-distributions have been used in the adaptive IF algorithm and proved to be efficient in tracking rapidly changing frequencies. They also enable direct amplitude estimation for the components of a multicomponent signal.
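The basic idea underlying IF estimation, that instantaneous frequency is the scaled derivative of the signal's phase, can be illustrated on a synthetic linear-FM signal. This simple phase-difference estimator is only an illustration of the concept; it is not the quadratic-TFD adaptive algorithm of the thesis, and it breaks down for multicomponent signals, which is exactly why TFD-based methods are needed there:

```python
import cmath
import math

fs = 1000.0  # sampling rate in Hz

# Linear-FM (chirp) analytic signal with IF law f(t) = 50 + 100*t Hz.
def phase(t):
    return 2 * math.pi * (50 * t + 50 * t * t)  # integral of f(t)

z = [cmath.exp(1j * phase(n / fs)) for n in range(1001)]

def inst_freq(z, fs):
    """Estimate IF as the scaled phase increment between adjacent samples."""
    return [fs * cmath.phase(b * a.conjugate()) / (2 * math.pi)
            for a, b in zip(z, z[1:])]

f_hat = inst_freq(z, fs)
# Around t = 0.5 s the true IF is 50 + 100*0.5 = 100 Hz.
```

For a real (non-analytic) signal, the analytic version would first have to be formed, via a Hilbert transformer or, as in Part-I of the thesis, a time-delay-based substitute.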
Abstract:
This paper investigates the control of an HVDC link, fed from an AC source through a controlled rectifier and feeding an AC line through a controlled inverter. The overall objective is to maintain the maximum possible link voltage at the inverter while regulating the link current. In this paper the practical feedback design issues are investigated with a view to obtaining simple, robust designs that are easy to evaluate for safety and operability. The investigations are applicable to back-to-back links used for frequency decoupling and to long DC lines. The design issues discussed include: (i) a review of overall system dynamics to establish the time scale of the different feedback loops and to highlight feedback design issues; (ii) the concept of using the inverter firing angle control to regulate link current when the rectifier firing angle controller saturates; and (iii) the design issues for the individual controllers, including robust design for varying line conditions and the trade-off between controller complexity and the reduction of nonlinearity and disturbance effects.
Abstract:
Traffic conflicts at railway junctions are very common, particularly on congested rail lines. While safe passage through the junction is well maintained by the signalling and interlocking systems, minimising the delays imposed on the trains by assigning the right-of-way sequence sensibly is a bonus to the quality of service. A deterministic method has been adopted to resolve the conflict, with the objective of minimising the total weighted delay. However, the computational demand remains significant. The applications of different heuristic methods to tackle this problem are reviewed and explored, elaborating their feasibility in various aspects and comparing their relative merits for further studies. As most heuristic methods do not guarantee a global optimum, this study focuses on the trade-off between computation time and optimality of the resolution.
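The optimality-versus-computation trade-off can be made concrete with a toy single-junction analogue: exhaustive search over right-of-way sequences guarantees the minimum total weighted delay but scales factorially with the number of trains, while a greedy ratio rule is instant. The train data are invented, and real junction conflicts have sequence-dependent clearance times that break the greedy rule's optimality guarantee, which is why it is only a heuristic there:

```python
from itertools import permutations

# (name, junction occupation time in minutes, delay weight)
trains = [("express", 3, 5), ("freight", 6, 1), ("suburban", 2, 3)]

def total_weighted_delay(order):
    t, cost = 0, 0
    for name, dur, w in order:
        t += dur       # junction clears this train at time t
        cost += w * t  # weighted completion time of this train
    return cost

# Exhaustive search: guaranteed optimum, but factorial in the number of trains.
best = min(permutations(trains), key=total_weighted_delay)

# Greedy heuristic (weight/time ratio first, i.e. Smith's rule); optimal for
# this fixed-occupation-time toy case, but not for general junction conflicts.
greedy = sorted(trains, key=lambda x: x[2] / x[1], reverse=True)
```

For three trains the two agree; the heuristic-methods question in the abstract is what happens when the exhaustive search stops being affordable.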
Abstract:
In this paper we present a novel distributed coding protocol for multi-user cooperative networks. The proposed distributed coding protocol exploits existing orthogonal space-time block codes to achieve higher diversity gain by repeating the code across time and space (available relay nodes). The achievable diversity gain depends on the number of relay nodes that can fully decode the signal from the source. These relay nodes then form space-time codes to cooperatively relay to the destination over a number of time slots. However, the improved diversity gain is achieved at the expense of the transmission rate. The design principles of the proposed space-time distributed code, and the issues related to the trade-off between transmission rate and diversity, are discussed in detail. We show that the proposed distributed space-time coding protocol outperforms existing distributed codes with a variable transmission rate.
Abstract:
For the first time in human history, large volumes of spoken audio are being broadcast, made available on the internet, archived, and monitored for surveillance every day. New technologies are urgently required to unlock these vast and powerful stores of information. Spoken Term Detection (STD) systems provide access to speech collections by detecting individual occurrences of specified search terms. The aim of this work is to develop improved STD solutions based on phonetic indexing. In particular, this work aims to develop phonetic STD systems for applications that require open-vocabulary search, fast indexing and search speeds, and accurate term detection. Within this scope, novel contributions are made within two research themes, that is, accommodating phone recognition errors and, secondly, modelling uncertainty with probabilistic scores. A state-of-the-art Dynamic Match Lattice Spotting (DMLS) system is used to address the problem of accommodating phone recognition errors with approximate phone sequence matching. Extensive experimentation on the use of DMLS is carried out and a number of novel enhancements are developed that provide for faster indexing, faster search, and improved accuracy. Firstly, a novel comparison of methods for deriving a phone error cost model is presented to improve STD accuracy, resulting in up to a 33% improvement in the Figure of Merit. A method is also presented for drastically increasing the speed of DMLS search by at least an order of magnitude with no loss in search accuracy. An investigation is then presented of the effects of increasing indexing speed for DMLS, by using simpler modelling during phone decoding, with results highlighting the trade-off between indexing speed, search speed and search accuracy. The Figure of Merit is further improved by up to 25% using a novel proposal to utilise word-level language modelling during DMLS indexing. 
Analysis shows that this use of language modelling can, however, be unhelpful or even disadvantageous for terms with a very low language model probability. The DMLS approach to STD involves generating an index of phone sequences using phone recognition. An alternative approach to phonetic STD is also investigated that instead indexes probabilistic acoustic scores in the form of a posterior-feature matrix. A state-of-the-art system is described and its use for STD is explored through several experiments on spontaneous conversational telephone speech. A novel technique and framework is proposed for discriminatively training such a system to directly maximise the Figure of Merit. This results in a 13% improvement in the Figure of Merit on held-out data. The framework is also found to be particularly useful for index compression in conjunction with the proposed optimisation technique, providing for a substantial index compression factor in addition to an overall gain in the Figure of Merit. These contributions significantly advance the state-of-the-art in phonetic STD, by improving the utility of such systems in a wide range of applications.
Abstract:
The city of Scottsdale, Arizona implemented the first fixed photo Speed Enforcement camera demonstration Program (SEP) on a US freeway in 2006. A comprehensive before-and-after analysis of the impact of the SEP on safety revealed significant reductions in crash frequency and severity, which indicates that the SEP is a promising countermeasure for improving safety. However, there is often a trade-off between safety and mobility when safety investments are considered. As a result, identifying safety countermeasures that both improve safety and reduce Travel Time Variability (TTV) is a desirable goal for traffic safety engineers. This paper reports on an analysis of the mobility impacts of the SEP, conducted by simulating the traffic network with and without the SEP, calibrated to real-world conditions. The simulation results show that the SEP decreased the TTV: the risk of unreliable travel was at least 23% higher in the ‘without SEP’ scenario than in the ‘with SEP’ scenario. In addition, the total Travel Time Savings (TTS) from the SEP was estimated to be at least 569 vehicle-hours/year. Consequently, the SEP is an efficient countermeasure not only for reducing crashes but also for improving mobility through TTS and reduced TTV.
Abstract:
A significant proportion of the cost of software development is due to software testing and maintenance. This is in part the result of the inevitable imperfections due to human error, lack of quality during the design and coding of software, and the increasing need to reduce faults to improve customer satisfaction in a competitive marketplace. Given the cost and importance of removing errors, improvements in fault detection and removal can be of significant benefit. The earlier in the development process faults can be found, the less it costs to correct them and the less likely other faults are to develop. This research aims to make the testing process more efficient and effective by identifying those software modules most likely to contain faults, allowing testing efforts to be carefully targeted. This is done with the use of machine learning algorithms which use examples of fault-prone and not fault-prone modules to develop predictive models of quality. In order to learn the numerical mapping between module and classification, a module is represented in terms of software metrics. A difficulty in this sort of problem is sourcing software engineering data of adequate quality. In this work, data is obtained from two sources: the NASA Metrics Data Program and the open-source Eclipse project. Feature selection before learning is applied, and a number of different feature selection methods are compared to find which work best. Two machine learning algorithms are applied to the data, Naive Bayes and the Support Vector Machine, and predictive results are compared to those of previous efforts and found to be superior on selected data sets and comparable on others. In addition, a new classification method is proposed, Rank Sum, in which a ranking abstraction is laid over bin densities for each class, and a classification is determined based on the sum of ranks over features. 
A novel extension of this method is also described, based on an observed polarising of points by class when rank sum is applied to training data to convert it into a 2D rank sum space. SVM is applied to this transformed data to produce models whose parameters can be set according to trade-off curves to obtain a particular performance balance.
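The metrics-to-classification mapping described above can be sketched with a tiny Gaussian Naive Bayes, one of the two learners named in the abstract. The module metrics, values and labels below are invented for illustration; the actual studies use the NASA and Eclipse metric suites:

```python
import math
from collections import defaultdict

def fit_gnb(X, y):
    """Gaussian Naive Bayes: per-class mean and variance of each metric."""
    by_class = defaultdict(list)
    for row, label in zip(X, y):
        by_class[label].append(row)
    stats, priors = {}, {}
    for label, rows in by_class.items():
        priors[label] = len(rows) / len(y)
        params = []
        for col in zip(*rows):  # one column per metric
            mu = sum(col) / len(col)
            var = sum((v - mu) ** 2 for v in col) / len(col) + 1e-9
            params.append((mu, var))
        stats[label] = params
    return stats, priors

def predict(stats, priors, row):
    def log_post(label):
        # Log prior plus independent Gaussian log-likelihood per metric.
        s = math.log(priors[label])
        for x, (mu, var) in zip(row, stats[label]):
            s += -0.5 * math.log(2 * math.pi * var) - (x - mu) ** 2 / (2 * var)
        return s
    return max(stats, key=log_post)

# Toy module metrics: [lines_of_code, cyclomatic_complexity]; 1 = fault-prone.
X = [[120, 4], [80, 3], [100, 5], [900, 25], [700, 30], [850, 22]]
y = [0, 0, 0, 1, 1, 1]
stats, priors = fit_gnb(X, y)
```

Ranking modules by the fault-prone posterior, rather than taking the hard label, is what lets testing effort be targeted at the riskiest modules first.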
Abstract:
As the need for concepts such as cancellation and OR-joins occurs naturally in business scenarios, comprehensive support in a workflow language is desirable. However, there is a clear trade-off between the expressive power of a language (i.e., introducing complex constructs such as cancellation and OR-joins) and ease of verification. When a workflow contains a large number of tasks and involves complex control flow dependencies, verification can take too much time or it may even be impossible. There are a number of different approaches to deal with this complexity. Reducing the size of the workflow, while preserving its essential properties with respect to a particular analysis problem, is one such approach. In this paper, we present a set of reduction rules for workflows with cancellation regions and OR-joins and demonstrate how they can be used to improve the efficiency of verification. Our results are presented in the context of the YAWL workflow language.
Abstract:
One of the surprising recurring phenomena observed in experiments with boosting is that the test error of the generated classifier usually does not increase as its size becomes very large, and often is observed to decrease even after the training error reaches zero. In this paper, we show that this phenomenon is related to the distribution of margins of the training examples with respect to the generated voting classification rule, where the margin of an example is simply the difference between the number of correct votes and the maximum number of votes received by any incorrect label. We show that techniques used in the analysis of Vapnik's support vector classifiers and of neural networks with small weights can be applied to voting methods to relate the margin distribution to the test error. We also show theoretically and experimentally that boosting is especially effective at increasing the margins of the training examples. Finally, we compare our explanation to those based on the bias-variance decomposition.
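The margin definition given in the abstract translates directly into code. A minimal sketch for a weighted voting classifier; the labels and voting weights are invented for the example:

```python
def margin(votes, correct, total_weight):
    """Margin of one example under a weighted vote.

    votes: dict mapping label -> total voting weight for that label.
    Margin = (weight for the correct label minus the largest weight for any
    incorrect label), normalised by total weight to lie in [-1, 1].
    """
    wrong = max((w for lbl, w in votes.items() if lbl != correct), default=0.0)
    return (votes.get(correct, 0.0) - wrong) / total_weight

# Three base classifiers with weights 0.5, 0.3, 0.2; the first and third
# vote "cat" (the correct label), the second votes "dog".
votes = {"cat": 0.5 + 0.2, "dog": 0.3}
m = margin(votes, correct="cat", total_weight=1.0)
# Positive margin: correctly classified, with vote mass to spare.
```

A positive margin means a correct classification, and a large margin means the vote would survive perturbation of several base classifiers, which is the intuition behind the paper's generalization bounds.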