Abstract:
Boards of directors are thought to provide access to a wealth of knowledge and resources for the companies they serve, and are considered important to corporate governance. Under the Resource Based View (RBV) of the firm (Wernerfelt, 1984), boards are viewed as a strategic resource available to firms. As a consequence, there has been a significant research effort aimed at establishing a link between board attributes and company performance. In this thesis I explore and extend the study of interlocking directorships (Mizruchi, 1996; Scott, 1991a) by examining the links between directors’ opportunity networks and firm performance. Specifically, I use resource dependence theory (Pfeffer & Salancik, 1978) and social capital theory (Burt, 1980b; Coleman, 1988) as the basis for a new measure of a board’s opportunity network. I contend that a director’s formal company ties and social ties together determine the opportunity network through which directors are able to access and mobilise resources for their firms. This approach is based on recent studies that suggest the measurement of interlocks at the director level, rather than at the firm level, may be a more reliable indicator of this phenomenon. This research uses publicly available data drawn from Australia’s top-105 listed companies and their directors in 1999. I employ Social Network Analysis (SNA) (Scott, 1991b) using the UCINET software to analyse each individual director’s formal and social networks. SNA is used to measure the number of ties a director has to other directors in the top-105 company director network at both one and two degrees of separation, that is, direct ties and indirect (or ‘friend of a friend’) ties. These individual measures of director connectedness are aggregated to produce a board-level network metric for comparison with measures of a firm’s performance using multiple regression analysis. Performance is measured with accounting-based and market-based measures.
Findings indicate that better-connected boards are associated with higher market-based company performance (measured by Tobin’s q). However, weaker and mostly unreliable associations were found for the accounting-based performance measure, return on assets (ROA). Furthermore, formal (or corporate) network ties are a stronger predictor of market performance than total network ties (comprising social and corporate ties). Similarly, strong ties (connectedness at degree-1) are better predictors of performance than weak ties (connectedness at degree-2). My research makes four contributions to the literature on director interlocks. First, it develops a new way of measuring a board’s opportunity network based on the director, rather than the company, as the unit of interlock. Second, it establishes evidence of a relationship between market-based measures of firm performance and the connectedness of that firm’s board. Third, it establishes that directors’ formal corporate ties matter more to market-based firm performance than their social ties. Fourth, it establishes that directors’ strong direct ties are more important to market-based performance than weak ties. The thesis concludes with implications for research and practice, including a more speculative interpretation of these results. In particular, I raise the possibility of reverse causality: that is, well-networked directors may seek to join high-performing companies. Thus, the relationship may be a result of symbolic action by companies seeking to increase the legitimacy of their firms rather than a reflection of the social capital available to the companies. This is an important consideration worthy of future investigation.
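The director-level connectedness measure described in this abstract can be sketched in a few lines. This is a minimal illustration under simple assumptions (an undirected tie list; all director names and ties below are hypothetical, not drawn from the study's data):

```python
from collections import defaultdict

# Hypothetical director co-membership ties: an edge means two directors
# sit together on at least one board (corporate tie) or share a social tie.
ties = [
    ("ann", "bob"), ("bob", "carol"), ("carol", "dan"),
    ("ann", "eve"), ("eve", "frank"),
]

adj = defaultdict(set)
for a, b in ties:
    adj[a].add(b)
    adj[b].add(a)

def connectedness(director, degree=1):
    """Number of distinct directors reachable within `degree` steps."""
    frontier, seen = {director}, {director}
    for _ in range(degree):
        frontier = {n for d in frontier for n in adj[d]} - seen
        seen |= frontier
    return len(seen) - 1  # exclude the director themself

# Board-level metric: aggregate (here, sum) of its members' connectedness.
board = ["ann", "bob"]
board_score = sum(connectedness(d, degree=2) for d in board)
```

Degree-1 counts direct ties, degree-2 adds the 'friend of a friend' ties, and summing over a board's members yields the board-level metric compared against firm performance in the regressions.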
Abstract:
Light gauge steel frame (LSF) structures are increasingly used in commercial and residential buildings because of their non-combustibility, dimensional stability and ease of installation. A common application is in floor-ceiling systems, which must be designed to serve as fire compartment boundaries and provide adequate fire resistance. Although fire-rated floor-ceiling assemblies are increasingly used in buildings, limited research has been undertaken in the past, and hence a thorough understanding of their fire resistance behaviour is not available. Recently, a new composite floor-ceiling system was developed to provide a higher fire rating, but this increased rating could not be determined using currently available design methods. Therefore, a research project was conducted to investigate its structural and fire resistance behaviour under standard fire conditions. This paper presents the results of full-scale experimental investigations into the structural and fire behaviour of the new LSF floor system protected by the composite ceiling unit. Both the conventional and the new floor systems were tested under structural and fire loads, and the results demonstrate the improvements provided by the new composite panel system in comparison to conventional floor systems. Numerical studies were also undertaken using the finite element program ABAQUS. Measured temperature profiles of the floors were used in the numerical analyses, and their results were compared with the fire test results. The tests and numerical studies provided a good understanding of the fire behaviour of LSF floor-ceiling systems and confirmed the superior performance of the new composite system.
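The numerical side of such studies rests on transient heat conduction through the ceiling layers. As a deliberately minimal illustration (not the paper's ABAQUS model; the layer thickness, diffusivity and fire temperature below are made-up values), an explicit 1-D finite-difference conduction step looks like:

```python
# Explicit 1-D transient conduction through a single ceiling layer.
alpha = 4e-7        # thermal diffusivity of a plasterboard-like layer, m^2/s (illustrative)
L, nx = 0.016, 9    # 16 mm layer discretised into 9 nodes
dx = L / (nx - 1)
dt = 0.4 * dx**2 / alpha   # explicit stability limit requires the factor < 0.5

T = [20.0] * nx     # initial ambient temperature, deg C
T_fire = 500.0      # hot-side (fire-exposed) boundary temperature, deg C

for _ in range(2000):
    T[0] = T_fire                      # fire-exposed face held at the furnace temperature
    new = T[:]
    for i in range(1, nx - 1):
        # Central-difference update of the interior nodes
        new[i] = T[i] + alpha * dt / dx**2 * (T[i+1] - 2*T[i] + T[i-1])
    new[-1] = new[-2]                  # crude zero-gradient (insulated) back face
    T = new
```

In the actual research, measured temperature profiles replace such idealised boundary conditions as the thermal input to the structural FE analysis.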
Abstract:
The experimental literature and studies using survey data have established that people care a great deal about their relative economic position and not solely, as standard economic theory assumes, about their absolute economic position. Individuals are concerned about social comparisons. However, behavioral evidence in the field is rare. This paper provides an empirical analysis testing the model of inequality aversion using two unique panel data sets for basketball and soccer players. We find evidence that inequality aversion helps to explain how the relative income situation affects performance in a real competitive environment with real tasks and real incentives.
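The abstract does not spell out the inequality-aversion model; the canonical formulation is the Fehr-Schmidt utility function, sketched here with illustrative parameter values (the paper's estimated parameters are not reproduced):

```python
def fehr_schmidt_utility(own, others, alpha=0.8, beta=0.4):
    """Fehr-Schmidt inequality-averse utility for one player.

    own     : this player's payoff
    others  : payoffs of the other players
    alpha   : weight on disadvantageous inequality ("envy"), alpha >= beta
    beta    : weight on advantageous inequality ("guilt")
    """
    n = len(others)
    envy = sum(max(o - own, 0) for o in others) / n
    guilt = sum(max(own - o, 0) for o in others) / n
    return own - alpha * envy - beta * guilt

# Equal pay leaves utility at the raw payoff; earning less than a peer
# reduces it through the envy term.
u_equal = fehr_schmidt_utility(10, [10, 10])
u_behind = fehr_schmidt_utility(10, [20, 10])
```

Under this model, a player's utility (and, per the paper's hypothesis, performance) falls as teammates' pay rises above their own, holding absolute pay fixed.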
Abstract:
Films of piezoelectric PVDF and P(VDF-TrFE) were exposed to vacuum UV (VUV, 115-300 nm) and γ-radiation to investigate how these two forms of radiation affect the chemical, morphological, and piezoelectric properties of the polymers. The extent of crosslinking was almost identical in both polymers after γ-irradiation, but surprisingly, was significantly higher for the TrFE copolymer after VUV-irradiation. Changes in the melting behavior were also more significant in the TrFE copolymer after VUV-irradiation due to both surface and bulk crosslinking, compared with only surface crosslinking for the PVDF films. The piezoelectric properties (measured using d33 piezoelectric coefficients and D-E hysteresis loops) were unchanged in the PVDF homopolymer, while the TrFE copolymer exhibited narrower D-E loops after exposure to either γ- or VUV-radiation. The more severe damage to the TrFE copolymer in comparison with the PVDF homopolymer after VUV-irradiation is explained by different energy deposition characteristics. The short wavelength, highly energetic photons are undoubtedly absorbed in the surface layers of both polymers, and we propose that while the longer wavelength components of the VUV-radiation are absorbed by the bulk of the TrFE copolymer causing crosslinking, they are transmitted harmlessly in the PVDF homopolymer.
Abstract:
Poly(vinylidene fluoride) and copolymers of vinylidene fluoride with hexafluoropropylene, trifluoroethylene and chlorotrifluoroethylene have been exposed to gamma irradiation in vacuum, up to doses of 1 MGy under identical conditions, to obtain a ranking of radiation sensitivities. Changes in the tensile properties, crystalline melting points, heats of fusion, gel contents and solvent uptake factors were used as the defining parameters. The initial degree of crystallinity and film processing had the greatest influence on relative radiation damage, although the cross-linked network features were almost identical in their solvent swelling characteristics, regardless of the comonomer composition or content.
Abstract:
The annual income return for rural property is based on two major factors: commodity prices and production yields. Commodity prices paid to rural producers can vary depending on the agricultural policies of their respective countries. Free-trade countries such as Australia and New Zealand are subject to the volatility of the world commodity markets to a greater extent than farmers in protected or subsidised markets. In countries where rural production is protected or subsidised, the annual income received by rural producers has been relatively stable. However, the high cost of agricultural protection is now being questioned, particularly in relation to the increasing economic costs of government services such as health, education and housing. When combined with the agricultural production limitations of climate, topography, chemical residues and disease, the impact of commodity prices on rural property income is crucial to the ability of rural producers to enter into or expand their holdings of agricultural land. These problems are then reflected in the volatility of rural land capital returns and the investment performance of this property class. This paper addresses the total and capital return performance of a major agricultural area and compares these returns on the basis of both location and land use. The comparison is used to determine whether location or actual land use has the greater influence on rural property capital returns. This performance analysis is based on over 35,000 rural sales transactions, covering all market-based rural property transactions in New South Wales, Australia for the period January 1990 to December 2008. Correlation analysis and investment performance analysis have also been carried out to determine possible relationships between location and land use and subsequent changes in rural land capital values.
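The correlation analysis mentioned here can be sketched with plain Pearson correlation between two return series. All return figures below are illustrative, not drawn from the 35,000-transaction dataset:

```python
import math
import statistics

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical annual capital returns (%) for two land uses in one region
grazing = [4.1, -2.0, 3.5, 6.0, 1.2]
cropping = [3.8, -1.5, 2.9, 5.5, 0.8]

r = pearson(grazing, cropping)
```

A coefficient near 1 would indicate the two land uses' capital values move together, suggesting location dominates; weaker correlation would point to land use as the stronger driver.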
Abstract:
Visual servoing has been a viable method of robot manipulator control for more than a decade. Initial developments involved position-based visual servoing (PBVS), in which the control signal exists in Cartesian space. The younger method, image-based visual servoing (IBVS), has seen considerable development in recent years. PBVS and IBVS offer tradeoffs in performance, and neither can solve all tasks that may confront a robot. In response to these issues, several methods have been devised that partition the control scheme, allowing some motions to be performed in the manner of a PBVS system, while the remaining motions are performed using an IBVS approach. To date, there has been little research that explores the relative strengths and weaknesses of these methods. In this paper we present such an evaluation. We have chosen three recent visual servo approaches for evaluation in addition to the traditional PBVS and IBVS approaches. We posit a set of performance metrics that measure quantitatively the performance of a visual servo controller for a specific task. We then evaluate each of the candidate visual servo methods for four canonical tasks with simulations and with experiments in a robotic work cell.
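The IBVS control law at the heart of such comparisons drives the camera with v = -λ L⁺ e, where e is the image-feature error and L the interaction matrix. A deliberately simplified sketch (one point feature, pure lateral camera translation, known depth Z, so L reduces to -(1/Z)I and v = λZe; none of this is the paper's specific controller):

```python
def ibvs_step(s, s_star, Z, lam=0.5, dt=0.1):
    """One IBVS step for a single normalized image point under pure
    lateral translation. Implements v = -lam * L^{-1} * (s - s*)."""
    e = [si - gi for si, gi in zip(s, s_star)]
    # L = -(1/Z) * I, so L^{-1} = -Z * I and the velocity command is lam*Z*e
    v = [lam * Z * ei for ei in e]
    # Feature motion under this velocity: s_dot = L v = -(1/Z) v
    return [si - (vi / Z) * dt for si, vi in zip(s, v)], v

# Drive the feature from (0.4, -0.2) toward the image-plane goal (0, 0).
s, goal = [0.4, -0.2], [0.0, 0.0]
for _ in range(100):
    s, v = ibvs_step(s, goal, Z=2.0)
```

The feature error decays exponentially toward the goal; the partitioned schemes evaluated in the paper apply this kind of image-space law to only some degrees of freedom and a PBVS-style Cartesian law to the rest.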
Abstract:
The performance of an adaptive filter may be studied through the behaviour of the optimal and adaptive coefficients in a given environment. This thesis investigates the performance of finite impulse response adaptive lattice filters for two classes of input signals: (a) frequency modulated signals with polynomial phases of order p in complex Gaussian white noise (as nonstationary signals), and (b) impulsive autoregressive processes with alpha-stable distributions (as non-Gaussian signals). Initially, an overview is given of linear prediction and adaptive filtering. The convergence and tracking properties of the stochastic gradient algorithms are discussed for stationary and nonstationary input signals. It is explained that the stochastic gradient lattice algorithm has many advantages over the least-mean square algorithm. These advantages include a modular structure, easily guaranteed stability, lower sensitivity to the eigenvalue spread of the input autocorrelation matrix, and easy quantization of the filter coefficients (normally called reflection coefficients). We then characterize the performance of the stochastic gradient lattice algorithm for frequency modulated signals through the optimal and adaptive lattice reflection coefficients. This is a difficult task due to the nonlinear dependence of the adaptive reflection coefficients on the preceding stages and the input signal. To ease the derivations, we assume that the reflection coefficients of each stage are independent of the inputs to that stage. The optimal lattice filter is then derived for frequency modulated signals by computing the optimal values of the residual errors, reflection coefficients, and recovery errors. Next, we show the tracking behaviour of the adaptive reflection coefficients for frequency modulated signals by computing the average tracking model of these coefficients for the stochastic gradient lattice algorithm.
The second-order convergence of the adaptive coefficients is investigated by modeling the theoretical asymptotic variance of the gradient noise at each stage. The accuracy of the analytical results is verified by computer simulations. Using these analytical results, we show a new property, the polynomial-order reducing property of adaptive lattice filters, which may be used to reduce the order of the polynomial phase of input frequency modulated signals. Two examples show how this property may be used in processing frequency modulated signals. In the first example, a detection procedure is carried out on a frequency modulated signal with a second-order polynomial phase in complex Gaussian white noise. We show that, using this technique, a better probability of detection is obtained for the reduced-order phase signals than with the traditional energy detector. It is also shown empirically that the distribution of the gradient noise in the first adaptive reflection coefficients approximates the Gaussian law. In the second example, the instantaneous frequency of the same observed signal is estimated. We show that this technique achieves a lower mean square error for the estimated frequencies at high signal-to-noise ratios than the adaptive line enhancer. The performance of adaptive lattice filters is then investigated for the second type of input signals, i.e., impulsive autoregressive processes with alpha-stable distributions. The concept of alpha-stable distributions is first introduced. We show that the stochastic gradient algorithm, which performs well for finite-variance input signals (such as frequency modulated signals in noise), does not converge quickly for infinite-variance stable processes (because it relies on the minimum mean-square error criterion).
To deal with such problems, the minimum dispersion criterion, fractional lower-order moments, and recently developed algorithms for stable processes are introduced. We then study the possibility of using the lattice structure for impulsive stable processes. Accordingly, two new algorithms, the least-mean p-norm lattice algorithm and its normalized version, are proposed for lattice filters based on fractional lower-order moments. Simulation results show that the proposed algorithms achieve faster convergence for parameter estimation of autoregressive stable processes with low to moderate degrees of impulsiveness than many other algorithms. We also discuss how the impulsiveness of stable processes generates misalignment between the estimated parameters and their true values. Due to the infinite variance of stable processes, the performance of the proposed algorithms is investigated using extensive computer simulations only.
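The least-mean p-norm idea can be sketched on a single gradient adaptive lattice stage: the usual LMS-style error in the coefficient update is replaced by the p-norm gradient term |e|^(p-1)·sign(e). This is a simplified one-stage illustration (the thesis's exact algorithms and normalisation are not reproduced; the AR(1) test signal is illustrative):

```python
import random

random.seed(0)
# Test signal: AR(1) process x[n] = 0.7 x[n-1] + w[n], whose first
# reflection (PARCOR) coefficient is approximately 0.7.
x, prev = [], 0.0
for _ in range(20000):
    prev = 0.7 * prev + random.gauss(0.0, 1.0)
    x.append(prev)

def lmp_lattice_stage(signal, p=1.8, mu=0.002):
    """Single-stage gradient adaptive lattice under a least-mean p-norm
    criterion; p = 2 recovers the ordinary LMS lattice update."""
    def g(e):
        # p-norm error gradient term: |e|^(p-1) * sign(e)
        return abs(e) ** (p - 1) * (1 if e >= 0 else -1)
    k, b_prev = 0.0, 0.0
    for xn in signal:
        f0 = b0 = xn
        f1 = f0 - k * b_prev            # forward prediction error
        b1 = b_prev - k * f0            # backward prediction error
        k += mu * (g(f1) * b_prev + g(b1) * f0)
        k = max(-0.999, min(0.999, k))  # keep the stage stable
        b_prev = b0
    return k

k = lmp_lattice_stage(x)
```

For this finite-variance input the estimate settles near the true PARCOR coefficient; the point of using p < 2 is that the same update remains usable when the input is an impulsive alpha-stable process with infinite variance.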
Abstract:
In recent years the development and use of crash prediction models for roadway safety analyses have received substantial attention. These models, also known as safety performance functions (SPFs), relate the expected crash frequency of roadway elements (intersections, road segments, on-ramps) to traffic volumes and other geometric and operational characteristics. A commonly practiced approach for applying intersection SPFs is to assume that crash types occur in fixed proportions (e.g., rear-end crashes make up 20% of crashes, angle crashes 35%, and so forth) and then apply these fixed proportions to crash totals to estimate crash frequencies by type. As demonstrated in this paper, such a practice makes questionable assumptions and results in considerable error in estimating crash proportions. Through the use of rudimentary SPFs based solely on the annual average daily traffic (AADT) of major and minor roads, the homogeneity-in-proportions assumption is shown not to hold across AADT, because crash proportions vary as a function of both major and minor road AADT. For example, with minor road AADT of 400 vehicles per day, the proportion of intersecting-direction crashes decreases from about 50% with 2,000 major road AADT to about 15% with 82,000 AADT. Same-direction crashes increase from about 15% to 55% for the same comparison. The homogeneity-in-proportions assumption should be abandoned, and crash type models should be used to predict crash frequency by crash type. SPFs that use additional geometric variables would only exacerbate the problem quantified here. Comparison of models for different crash types using additional geometric variables remains the subject of future research.
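The homogeneity-in-proportions issue is easy to reproduce with toy type-specific SPFs of the common form N = exp(b0)·AADT_major^b1·AADT_minor^b2. The coefficients below are made up for illustration (not those estimated in the paper), but they show the mechanism: when two crash types scale differently with AADT, their proportions cannot be constant:

```python
import math

def spf(aadt_major, aadt_minor, b0, b1, b2):
    """Type-specific safety performance function (illustrative form)."""
    return math.exp(b0) * aadt_major ** b1 * aadt_minor ** b2

def crash_proportions(aadt_major, aadt_minor):
    # Hypothetical coefficients: same-direction crashes grow faster
    # with major-road AADT than intersecting-direction crashes.
    same_dir = spf(aadt_major, aadt_minor, -8.0, 1.1, 0.2)
    intersecting = spf(aadt_major, aadt_minor, -6.0, 0.6, 0.5)
    total = same_dir + intersecting
    return same_dir / total, intersecting / total

lo = crash_proportions(2_000, 400)    # low major-road AADT
hi = crash_proportions(82_000, 400)   # high major-road AADT
```

With these toy coefficients the same-direction share rises and the intersecting-direction share falls as major-road AADT grows, mirroring the pattern the paper reports and contradicting any fixed-proportion assumption.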
Abstract:
This paper reports on an empirical comparison of seven machine learning algorithms for texture classification, with application to vegetation management in power line corridors. To classify tree species in power line corridors, an object-based method is employed: individual tree crowns are segmented as the basic classification units, and three classic texture features are extracted as the input to the classification algorithms. Several widely used performance metrics are used to evaluate the classification algorithms. The experimental results demonstrate that the classification performance depends on the performance metric, the characteristics of the dataset and the features used.
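The evaluation loop such comparisons rely on can be sketched with two metrics and two trivial stand-in classifiers. Everything below (feature values, labels, the threshold rule) is illustrative, not the paper's data or algorithms:

```python
def accuracy(y_true, y_pred):
    """Fraction of correctly labelled samples."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def f1(y_true, y_pred, positive=1):
    """Harmonic mean of precision and recall for the positive class."""
    tp = sum(t == p == positive for t, p in zip(y_true, y_pred))
    fp = sum(p == positive and t != positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    if tp == 0:
        return 0.0
    prec, rec = tp / (tp + fp), tp / (tp + fn)
    return 2 * prec * rec / (prec + rec)

# One texture feature per segmented tree crown; label 1 = target species.
features = [0.2, 0.3, 0.25, 0.8, 0.9, 0.85]
labels   = [0,   0,   0,    1,   1,   1]

threshold_clf = [1 if f > 0.5 else 0 for f in features]  # stand-in classifier
majority_clf  = [1] * len(features)                      # naive baseline

scores = {
    "threshold": (accuracy(labels, threshold_clf), f1(labels, threshold_clf)),
    "majority":  (accuracy(labels, majority_clf),  f1(labels, majority_clf)),
}
```

Note how the two metrics rank the baseline differently (its F1 is far higher than its usefulness warrants on balanced data would suggest from accuracy alone), which is exactly the paper's point that the conclusion depends on the performance metric chosen.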
Abstract:
The traditional searching method for model-order selection in linear regression is a nested full-parameters-set searching procedure over the desired orders, which we call full-model order selection. On the other hand, a method for model selection searches for the best sub-model within each order. In this paper, we propose using the model-selection searching method for model-order selection, which we call partial-model order selection. We show by simulations that the proposed searching method gives better accuracies than the traditional one, especially for low signal-to-noise ratios, over a wide range of model-order selection criteria (both information-theoretic and bootstrap-based). Also, we show that for some models the performance of the bootstrap-based criterion improves significantly by using the proposed partial-model selection searching method.

Index Terms: model order estimation, model selection, information theoretic criteria, bootstrap

1. INTRODUCTION

Several model-order selection criteria can be applied to find the optimal order. Some of the more commonly used information theoretic-based procedures include Akaike’s information criterion (AIC) [1], corrected Akaike (AICc) [2], minimum description length (MDL) [3], normalized maximum likelihood (NML) [4], the Hannan-Quinn criterion (HQC) [5], conditional model-order estimation (CME) [6], and the efficient detection criterion (EDC) [7]. From a practical point of view, it is difficult to decide which model-order selection criterion to use. Many of them perform reasonably well when the signal-to-noise ratio (SNR) is high. The discrepancies in their performance, however, become more evident when the SNR is low. In those situations, the performance of a given technique is determined not only by the model structure (say, a polynomial trend versus a Fourier series) but, more importantly, by the relative values of the parameters within the model.
This makes the comparison between model-order selection algorithms difficult, since within the same model at a given order one can find examples for which a given method performs well or fails [6, 8]. Our aim is to improve the performance of the model-order selection criteria in cases where the SNR is low by considering a model-selection searching procedure that takes into account not only the full-model order search but also a partial-model order search within the given model order. Understandably, the improvement in the performance of the model-order estimation comes at the expense of additional computational complexity.
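The partial-model search can be sketched concretely: instead of fitting only the nested full models {1}, {1,x}, {1,x,x²}, ..., score every subset of regressors at each order and pick the minimum-AIC candidate. This toy uses polynomial regressors, a made-up sparse true model, and AIC only (the paper also covers MDL, bootstrap criteria, etc.):

```python
import itertools
import math
import random

def ols_rss(X_cols, y):
    """Residual sum of squares of an OLS fit via the normal equations,
    solved with a tiny Gaussian elimination (enough for this sketch)."""
    k = len(X_cols)
    A = [[sum(a * b for a, b in zip(X_cols[i], X_cols[j])) for j in range(k)]
         for i in range(k)]
    b = [sum(a * yi for a, yi in zip(X_cols[i], y)) for i in range(k)]
    for c in range(k):                       # forward elimination with pivoting
        piv = max(range(c, k), key=lambda r: abs(A[r][c]))
        A[c], A[piv] = A[piv], A[c]
        b[c], b[piv] = b[piv], b[c]
        for r in range(c + 1, k):
            f = A[r][c] / A[c][c]
            for j in range(c, k):
                A[r][j] -= f * A[c][j]
            b[r] -= f * b[c]
    beta = [0.0] * k
    for c in reversed(range(k)):             # back substitution
        beta[c] = (b[c] - sum(A[c][j] * beta[j] for j in range(c + 1, k))) / A[c][c]
    resid = [yi - sum(beta[j] * X_cols[j][i] for j in range(k))
             for i, yi in enumerate(y)]
    return sum(r * r for r in resid)

def aic(rss, n, k):
    """Gaussian-likelihood AIC up to an additive constant."""
    return n * math.log(rss / n) + 2 * k

random.seed(1)
n = 60
x = [i / n for i in range(n)]
# True model uses only the constant and cubic terms (a "partial" model).
y = [1.0 + 2.0 * xi**3 + random.gauss(0, 0.05) for xi in x]
basis = [[xi**d for xi in x] for d in range(4)]  # columns 1, x, x^2, x^3

# Partial-model search: best AIC over every monomial subset of every order.
best = min(
    ((aic(ols_rss([basis[i] for i in subset], y), n, len(subset)), subset)
     for r in range(1, 5)
     for subset in itertools.combinations(range(4), r)),
    key=lambda t: t[0],
)
```

A nested full-model search could never consider {1, x³} without also paying for x and x²; the subset search can, which is the source of the accuracy gain at low SNR, at the cost of the combinatorial number of fits.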
Abstract:
This thesis investigates the coefficient of performance (COP) of a hybrid liquid desiccant solar cooling system. This hybrid cooling system comprises three sections: 1) a conventional air-conditioning section; 2) a liquid desiccant dehumidification section; and 3) an air mixture section. The air handling unit (AHU) with a mixture variable air volume design is included in the hybrid cooling system to control humidity. In the combined system, the air is first dehumidified in the dehumidifier and then mixed with ambient air by the AHU before entering the evaporator. Experiments using lithium chloride as the liquid desiccant have been carried out to evaluate the performance of the dehumidifier and regenerator. Based on the air mixture (AHU) design, models of the electrical coefficient of performance (ECOP), thermal coefficient of performance (TCOP) and whole-system coefficient of performance (COPsys) were developed to evaluate the performance of the hybrid liquid desiccant solar cooling system. These mathematical models can describe the coefficient of performance trend under different ambient conditions, while also providing a convenient comparison with conventional air conditioning systems, and they explain the relationship between the models' performance predictions and ambient air parameters. The simulation results reveal that the coefficient of performance of hybrid liquid desiccant solar cooling systems depends substantially on ambient air and dehumidifier parameters. The liquid desiccant experiments also prove that the latent component of the total cooling load requirements can easily be met by the liquid desiccant dehumidifier. While cooling requirements can be met, the liquid desiccant system is, however, still subject to hysteresis problems.
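The three COP figures named in the abstract are, under the common textbook conventions (the thesis's exact model terms may differ), simple energy ratios:

```python
def ecop(q_cooling, w_electrical):
    """Electrical COP: cooling delivered per unit electrical input."""
    return q_cooling / w_electrical

def tcop(q_cooling, q_thermal):
    """Thermal COP: cooling delivered per unit (solar) heat input
    supplied to the desiccant regenerator."""
    return q_cooling / q_thermal

def cop_sys(q_cooling, w_electrical, q_thermal):
    """Whole-system COP over all energy inputs combined."""
    return q_cooling / (w_electrical + q_thermal)

# Illustrative operating point (kW): 10 kW of cooling from 2.5 kW of
# electricity plus 6 kW of solar heat.
e, t, s = ecop(10, 2.5), tcop(10, 6), cop_sys(10, 2.5, 6)
```

Because the solar heat input is notionally free, the ECOP can compare favourably with a conventional vapour-compression system even when the whole-system COP is modest.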
Abstract:
While hybrid governance arrangements have been a major element of organisational architecture for some time, the contemporary operating environment has brought to the fore new conditions and expectations for the governance of entities that span conventional public sector departments, private firms and community organisations or groups. These conditions have resulted in a broader array of mixed governance configurations, including Public Private Partnerships, alliances, and formal and informal collaborations. In some such arrangements, market-based or ‘complete’ contractual relationships have been introduced to replace or supplement existing traditional ‘hierarchical’ and/or newer relational ‘network-oriented’ institutional associations. While there has been a greater reliance on collaborative or relational contracts as an underpinning institutional model, other modes of hierarchy and market may remain in operation. The success of these emergent hybrid forms has been mixed. There are examples of hybrids that have been well adopted, achieving the desired goals of efficiency, effectiveness and financial accountability, while others have experienced implementation problems which have undermined their results. This paper postulates that the cultural and institutional context within which hybrids operate may contribute to the implementation processes employed and the level of success attained. The paper explores hybrid arrangements through three cases of the use of inter-organisational arrangements in three different national contexts. Distilling the various elements of hybrids and the impact of institutional context will provide important insights for those charged with responsibility for forming such arrangements and for developing key infrastructure and public value.
Abstract:
Daylighting in tropical and sub-tropical climates presents a unique challenge that is generally not well understood by designers. In a sub-tropical region such as Brisbane, Australia, the majority of the year comprises sunny, clear skies with few overcast days, and as a consequence windows can easily become sources of overheating and glare. The main strategy for dealing with this issue is extensive shading on windows. However, this in turn prevents daylight penetration into buildings, often causing an interior to appear gloomy and dark even though there is more than sufficient daylight available. As a result, electric lighting is the main source of light, even during the day. Innovative daylight devices which redirect light from windows offer a potential solution to this issue. These devices can potentially improve daylighting in buildings by increasing the illumination within the environment, decreasing the high contrast between the window and work regions, and deflecting potentially glare-causing sunlight away from the observer. However, the performance of such innovative daylighting devices is generally quantified under overcast skies (i.e. daylight factors) or skies without sun, which are typical of European climates and are misleading when considering these devices for tropical or sub-tropical climates. This study sought to compare four innovative window daylighting devices in RADIANCE: light shelves, laser cut panels, micro-light guides and light-redirecting blinds. These devices were simulated in RADIANCE under sub-tropical skies (for Brisbane) within the test case of a typical CBD office space. For each device, the quantity of light redirected and its distribution within the space was used as the basis for comparison. In addition, glare analysis for each device was conducted using Wienold and Christoffersen's evalglare. The analysis was conducted for selected hours of a day in each season.
The majority of buildings that humans will occupy in their lifetime are already constructed, and extensive remodelling of most of these buildings is unlikely. Therefore the most effective way to improve daylighting in the near future will be through the alteration of existing window spaces. It is thus important to understand the performance of daylighting systems with respect to the climate in which they will be used. This type of analysis is important for determining the applicability of a daylighting strategy so that designers can achieve energy efficiency as well as the health benefits of natural daylight.
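The glare analysis relies on evalglare, which implements the Daylight Glare Probability (DGP). A direct sketch of the published DGP formula follows; the scene values passed in below are illustrative only, not results from the study's RADIANCE simulations:

```python
import math

def dgp(E_v, sources):
    """Daylight Glare Probability (Wienold & Christoffersen).

    E_v     : vertical eye illuminance in lux
    sources : list of (L_s, omega_s, P) tuples, the glare source luminance
              (cd/m^2), solid angle (sr) and Guth position index
    """
    glare_sum = sum(L**2 * w / (E_v**1.87 * P**2) for L, w, P in sources)
    return 5.87e-5 * E_v + 9.18e-2 * math.log10(1 + glare_sum) + 0.16

# A single bright window patch seen slightly off the view axis:
value = dgp(2500.0, [(6000.0, 0.1, 1.2)])
```

DGP values below roughly 0.35 are commonly read as imperceptible glare, with higher bands for perceptible, disturbing and intolerable glare; a daylight-redirecting device succeeds when it raises interior illuminance without pushing the DGP into the upper bands.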