308 results for "Reasonable profits"


Relevance:

10.00%

Publisher:

Abstract:

Quality in construction projects should be regarded as the fulfillment of the expectations of those contributors involved in such projects. Although a significant number of quality practices has been introduced within the industry, attainment of reasonable levels of quality in construction projects continues to be an ongoing problem. To date, some research into the introduction and improvement of quality practices and stakeholder management has been undertaken, but so far no major studies have comprehensively examined how greater consideration of stakeholders’ perspectives of quality can contribute to final project quality outcomes. This paper examines the requirements for developing a framework that leads to more effective involvement of stakeholders in quality planning and practices, ultimately contributing to higher quality outcomes for construction projects. Through an extensive literature review, it highlights various perceptions of quality, categorizes quality issues with particular focus on benefits and shortcomings, and examines the viewpoints of major stakeholders on project quality. It proposes a set of criteria to be used as the basis for a quality practice improvement framework, which will provide project managers and owners with the information and strategic direction required to achieve their own and their stakeholders’ targets for implementing quality practices, leading to improved quality outcomes on future projects.

Relevance:

10.00%

Publisher:

Abstract:

Purpose – The purpose of this paper is to jointly assess the impact of regulatory reform for corporate fundraising in Australia (CLERP Act 1999) and the relaxation of ASX admission rules in 1999 on the accuracy of management earnings forecasts in initial public offer (IPO) prospectuses. The relaxation of ASX listing rules permitted a new category of new-economy firms (commitments test entities (CTEs)) to list without a prior history of profitability, while the CLERP Act (introduced in 2000) was accompanied by tighter disclosure obligations and stronger enforcement action by the corporate regulator (ASIC). Design/methodology/approach – All IPO earnings forecasts in prospectuses lodged between 1998 and 2003 are examined to assess the pre- and post-CLERP Act impact. Given active ASIC enforcement action in the post-reform period, IPO firms are hypothesised to provide more accurate forecasts, particularly CTE firms, which are less likely to have a reasonable basis for forecasting. Research models are developed to empirically test the impact of the reforms on CTE and non-CTE IPO firms. Findings – The new regulatory environment has had a positive impact on management forecasting behaviour. In the post-CLERP Act period, the accuracy of prospectus forecasts and their revisions improved significantly and, as expected, the results are primarily driven by CTE firms. However, the majority of prospectus forecasts continue to be materially inaccurate. Originality/value – The results highlight the need to control for both the changing nature of listed firms and the level of enforcement action when examining responses to regulatory changes to corporate fundraising activities.
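As an illustrative aside, forecast accuracy in studies of this kind is typically expressed as a scaled error between realised and forecast earnings. The sketch below shows one common form (absolute forecast error deflated by the forecast); the abstract does not state the paper’s exact metric or deflator, so treat this definition as an assumption.

```python
def absolute_forecast_error(actual: float, forecast: float) -> float:
    """Absolute forecast error (AFE), scaled by the forecast.

    A common accuracy measure in IPO earnings-forecast studies;
    the deflator varies across papers (forecast, actual, or total
    assets), so this particular form is illustrative only.
    """
    return abs(actual - forecast) / abs(forecast)

# A forecast of $10m against realised earnings of $7m gives a 30% error.
print(absolute_forecast_error(7.0, 10.0))  # 0.3
```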

Relevance:

10.00%

Publisher:

Abstract:

Based on the AFM-bending experiments, a molecular dynamics (MD) bending simulation model is established which accurately accounts for the full spectrum of the mechanical properties of NWs in a double-clamped beam configuration, ranging from elasticity to plasticity and failure. It is found that the loading rate exerts a significant influence on the mechanical behaviour of nanowires (NWs). Specifically, a loading rate lower than 10 m/s is found reasonable for a homogeneous bending deformation. Both the loading rate and the potential between the tip and the NW are found to play an important role in the adhesive phenomenon. The force versus displacement (F-d) curve from the MD simulation is highly consistent in shape with that from experiments. Symmetrical F-d curves during loading and unloading processes are observed, which reveal the linear-elastic and non-elastic bending deformation of NWs. The typical bending-induced tensile-compressive features are observed. Meanwhile, the simulation results are excellently fitted by the classical Euler-Bernoulli beam theory with axial effect. It is concluded that the axial tensile force becomes crucial in bending deformation when the beam size is down to the nanoscale for double-clamped NWs. In addition, we find that shorter NWs exhibit earlier yielding and a larger yield force. Mechanical properties (Young’s modulus and yield strength) obtained from bending and tensile deformations are found to be comparable with each other. Specifically, the modulus is essentially similar under these two loading methods, while the yield strength during bending is observed to be larger than that during tension.
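The fitting described above, Euler-Bernoulli bending with an axial (membrane) term, can be sketched as a force-deflection relation for a double-clamped circular nanowire. The cubic coefficient `alpha` below is an assumed illustrative value, not the constant fitted in the study.

```python
import math

def clamped_beam_force(delta, E, d, L, alpha=4.93):
    """Point load F at the midspan of a double-clamped circular NW.

    Linear term: the classical Euler-Bernoulli stiffness 192*E*I/L^3
    for a clamped-clamped beam loaded at midspan.
    Cubic term: axial (membrane) stretching, which becomes important
    once deflection approaches the wire diameter.  `alpha` is an
    assumed illustrative coefficient, not a fitted value.
    """
    I = math.pi * d**4 / 64.0   # second moment of area, circular section
    A = math.pi * d**2 / 4.0    # cross-sectional area
    return (192.0 * E * I / L**3) * delta + alpha * (E * A / L**3) * delta**3
```

The cubic term models the hardening response: doubling the deflection more than doubles the restoring force, which is the signature of the axial effect noted above.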

Relevance:

10.00%

Publisher:

Abstract:

Demands for delivering high instantaneous power in a compressed form (pulse shape) have increased widely during recent decades. The flexible shapes with variable pulse specifications offered by pulsed power have made it a practical and effective supply method for an extensive range of applications. In particular, the release of basic subatomic particles (i.e. electrons, protons and neutrons) in an atom (the ionization process) and the synthesizing of molecules to form ions or other molecules are among those reactions that necessitate a large amount of instantaneous power. In addition to such decomposition processes, there have recently been demands for pulsed power in other areas such as the combination of molecules (i.e. fusion, material joining), radiation generation (i.e. electron beams, laser and radar), explosions (i.e. concrete recycling), and wastewater, exhaust-gas and material-surface treatments. These pulses are widely employed in the silent discharge process in all types of materials (including gases, fluids and solids), in some cases to form a plasma and consequently accelerate the associated process. Due to this fast-growing demand for pulsed power in industrial and environmental applications, the need for more efficient and flexible pulse modulators is now receiving greater consideration. Sensitive applications, such as plasma fusion and laser guns, also require more precisely produced repetitive pulses of higher quality. Many research studies are being conducted in areas that need a flexible pulse modulator to vary pulse features and so investigate the influence of these variations on the application. In addition, there is the need to prevent the waste of a considerable amount of energy caused by the arc phenomena that frequently occur after the plasma process. Control over power flow during the supply process is a critical capability that enables the pulse supply to halt the supply process at any stage.
Different pulse modulators utilising different accumulation techniques, including Marx generators (MG), magnetic pulse compressors (MPC), pulse forming networks (PFN) and multistage Blumlein lines (MBL), are currently employed to supply a wide range of applications. Gas/magnetic switching technologies (such as the spark gap and the hydrogen thyratron) have conventionally been used as switching devices in pulse modulator structures because of their high voltage ratings and considerably short rise times. However, they also suffer from serious drawbacks such as low efficiency, reliability and repetition rate, and a short life span. They are also bulky, heavy and expensive. Recently developed solid-state switching technology is an appropriate substitute for these devices due to the benefits it brings to pulse supplies. Besides being compact, efficient, reasonably priced and reliable, and having a long life span, its high-frequency switching capability allows repetitive operation of the pulsed power supply. The main concerns in using solid-state transistors are the voltage rating and the rise time of available switches, which in some cases cannot satisfy the application’s requirements. However, there are several power electronics configurations and techniques that make solid-state utilisation feasible for high-voltage pulse generation. Therefore, the design and development of novel methods and topologies with higher efficiency and flexibility for pulsed power generators is the main scope of this research work. This aim is pursued through several innovative proposals that can be classified under the following two principal objectives.
• To innovate and develop novel solid-state based topologies for pulsed power generation.
• To improve available technologies that have the potential to accommodate solid-state technology, by revising, reconfiguring and adjusting their structures and control algorithms.
The quest to identify novel topologies for proper pulsed power production began with a deep and thorough review of conventional pulse generators and useful power electronics topologies. As a result of this study, it appears that efficiency and flexibility are the most significant demands of plasma applications that have not been met by state-of-the-art methods. Many solid-state based configurations were considered and simulated in order to evaluate their potential to be utilised in the pulsed power area. Parts of this literature review are documented in Chapter 1 of this thesis. Current source topologies demonstrate valuable advantages in supplying loads with capacitive characteristics, such as plasma applications. To investigate the influence of switching transients associated with solid-state devices on the rise time of pulses, simulation-based studies were undertaken. A variable current source was considered to pump different current levels into a capacitive load, and it was evident that dissimilar dv/dt values are produced at the output. The evidence acquired from this examination thereby rules out switching transients as the determinant of pulse rise time. A detailed report of this study is given in Chapter 6 of this thesis. This study inspired the design of a solid-state based topology that takes advantage of both current and voltage sources. A series of switch-resistor-capacitor units at the output splits the produced voltage into lower levels, so it can be shared by the switches. A smart but complicated switching strategy is also designed to discharge the residual energy after each supply cycle.
To prevent reverse power flow and to reduce the complexity of the control algorithm in this system, the resistors in the common paths of the units are substituted with diode rectifiers (switch-diode-capacitor). This modification not only makes it feasible to stop the load supply process at any stage (and consequently to save energy), but also enables the converter to operate in a two-stroke mode with asymmetrical capacitors. Component selection and energy-exchange calculations are carried out with respect to application specifications and demands. Both topologies were modelled in simplified form and simulation studies were carried out with the simplified models. Experimental assessments were also executed on implemented hardware, and the results verified the initial analysis. Details of both converters are thoroughly discussed in Chapters 2 and 3 of the thesis. Conventional MGs have recently been modified to use solid-state transistors (i.e. insulated gate bipolar transistors) instead of magnetic/gas switching devices. The resistive insulators previously used in their structures are substituted with diode rectifiers to give MGs proper voltage sharing. However, despite utilising solid-state technology in MG configurations, further design and control amendments can still be made to achieve improved performance with fewer components. Considering a number of charging techniques, the resonant phenomenon is adopted in a proposal for charging the capacitors. In addition to charging the capacitors to twice the input voltage, triggering the switches at the moment at which the current conducted through them is zero significantly reduces the switching losses. Another configuration is also introduced in this research for the Marx topology, based on commutation circuits that use a current source to charge the capacitors.
According to this design, diode-capacitor units, each including two Marx stages, are connected in cascade through solid-state devices and aggregate the voltages across the capacitors to produce a high-voltage pulse. The polarity of the voltage across one capacitor in each unit is reversed in an intermediate mode by connecting the commutation circuit to that capacitor. Insulation of the input side from the load side is provided in this topology by disconnecting the load from the current source during the supply process. Furthermore, the number of fast switching devices required in both designs is reduced to half the number used in a conventional MG; they are replaced with slower switches (such as thyristors) that need simpler driving modules. In addition, the number of switches contributing to the discharge paths is halved, which leads to a reduction in conduction losses. The associated models are simulated, and hardware tests are performed to verify the validity of the proposed topologies. Chapters 4, 5 and 7 of the thesis present all the relevant analyses and approaches for these topologies.
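For orientation, the basic Marx principle underlying these topologies (capacitors charged in parallel, then discharged in series) can be sketched in a few lines; the stage count and per-switch drop below are illustrative assumptions, not values from the thesis.

```python
def marx_output_voltage(n_stages: int, v_charge: float, v_drop: float = 0.0) -> float:
    """Ideal erected voltage of an n-stage Marx generator.

    Capacitors are charged in parallel to `v_charge` and then switched
    into series, so the ideal output is n * v_charge; `v_drop` models a
    per-stage loss across each (solid-state) switch.  Resonant charging,
    as proposed in the thesis, would instead let each stage charge to
    roughly twice the input voltage.
    """
    return n_stages * (v_charge - v_drop)

# Ten 1 kV stages with a 2 V drop per switch erect to just under 10 kV.
print(marx_output_voltage(10, 1000.0, v_drop=2.0))  # 9980.0
```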

Relevance:

10.00%

Publisher:

Abstract:

Despite promising benefits and advantages, there are reports of failures and low realisation of benefits in Enterprise System (ES) initiatives. Among the research on the factors that influence ES success, there is a dearth of studies on the knowledge implications of multiple end-user groups using the same ES application. An ES facilitates the work of several user groups, ranging from strategic management and middle management to operational staff, all using the same system for multiple objectives. Given the fundamental characteristics of an ES – integration of modules, business process views, and aspects of information transparency – it is necessary that all frequent end-users share a reasonable amount of common knowledge and integrate their knowledge to yield new knowledge. Recent literature on ES implementation highlights the importance of Knowledge Integration (KI) for implementation success. Unfortunately, the importance of KI is often overlooked, and little is known about the role of KI in ES success. Many organisations do not achieve the potential benefits from their ES investment because they do not consider the need for, or their ability to achieve, integration of their employees’ knowledge. This study is designed to improve our understanding of the influence of KI among ES end-users on operational ES success. The three objectives of the study are: (I) to identify and validate the antecedents of KI effectiveness; (II) to investigate the impact of KI effectiveness on the goodness of individuals’ ES-knowledge base; and (III) to examine the impact of the goodness of individuals’ ES-knowledge base on operational ES success. For this purpose, we employ the KI factors identified by Grant (1996) and the IS-impact measurement model from the work of Gable et al. (2008) to examine ES success. The study derives its findings from data gathered from six Malaysian companies in order to achieve the three-fold goal of this thesis as outlined above.
The relationships between the antecedents of KI effectiveness and its consequences are tested using 188 responses to a survey representing the views of the management and operational employment cohorts. Using statistical methods, we confirm three antecedents of KI effectiveness and validate their consequences for ES success. The findings demonstrate a statistically significant positive impact of KI effectiveness on ES success, with KI effectiveness contributing to almost one-third of ES success. This research makes a number of contributions to the understanding of the influence of KI on ES success. First, based on empirical work using a complete nomological net model, the role of KI effectiveness in ES success is evidenced. Second, the model provides a theoretical lens for a more comprehensive understanding of the impact of KI on the level of ES success. Third, restructuring the dimensions of the knowledge-based theory to fit the context of ES extends its applicability and generalisability to contemporary Information Systems. Fourth, the study develops and validates measures for the antecedents of KI effectiveness. Fifth, the study demonstrates the statistically significant positive influence of the goodness of KI on ES success. From a practical viewpoint, this study emphasises the importance of KI effectiveness as a direct antecedent of ES success. Practical lessons can be drawn from the work done in this study to empirically identify the critical factors among the antecedents of KI effectiveness that should be given attention.

Relevance:

10.00%

Publisher:

Abstract:

Bioinformatics involves analyses of biological data such as DNA sequences, microarrays and protein-protein interaction (PPI) networks. Its two main objectives are the identification of genes or proteins and the prediction of their functions. Biological data often contain uncertain and imprecise information. Fuzzy theory provides useful tools to deal with this type of information and has therefore played an important role in analyses of biological data. In this thesis, we aim to develop some new fuzzy techniques and apply them to DNA microarrays and PPI networks. We focus on three problems: (1) clustering of microarrays; (2) identification of disease-associated genes in microarrays; and (3) identification of protein complexes in PPI networks. The first part of the thesis aims to detect, by the fuzzy C-means (FCM) method, clustering structures in DNA microarrays corrupted by noise. Because of the presence of noise, some clustering structures found in random data may not have any biological significance. In this part, we propose to combine the FCM with empirical mode decomposition (EMD) for clustering microarray data. The purpose of EMD is to reduce, and preferably remove, the effect of noise, resulting in what is known as denoised data. We call this method the fuzzy C-means method with empirical mode decomposition (FCM-EMD). We applied this method to yeast and serum microarrays, and silhouette values were used to assess the quality of clustering. The results indicate that the clustering structures of denoised data are more reasonable, implying that genes have tighter association with their clusters. Furthermore, we found that the estimation of the fuzzy parameter m, which is a difficult step, can be avoided to some extent by analysing denoised microarray data. The second part aims to identify disease-associated genes from DNA microarray data which are generated under different conditions, e.g., from patients and normal people.
We developed a type-2 fuzzy membership (FM) function for the identification of disease-associated genes. This approach was applied to diabetes and lung cancer data, and a comparison with the original FM test was carried out. Among the ten best-ranked genes of diabetes identified by the type-2 FM test, seven genes have been confirmed as diabetes-associated genes according to gene description information in GenBank and the published literature. An additional gene is newly identified. Among the ten best-ranked genes identified in the lung cancer data, seven are confirmed to be associated with lung cancer or its treatment. The type-2 FM-d values are significantly different, which makes the identifications more convincing than those of the original FM test. The third part of the thesis aims to identify protein complexes in large interaction networks. Identification of protein complexes is crucial to understanding the principles of cellular organisation and to predicting protein functions. In this part, we propose a novel method which combines the fuzzy clustering method and interaction probability to identify the overlapping and non-overlapping community structures in PPI networks, and then to detect protein complexes in these sub-networks. Our method is based on both the fuzzy relation model and the graph model. We applied the method to several PPI networks and compared it with a popular protein complex identification method, the clique percolation method. For the same data, we detected more protein complexes. We also applied our method to two social networks. The results show that our method works well for detecting sub-networks and gives a reasonable understanding of these communities.
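The FCM step at the core of the FCM-EMD method can be sketched with the standard fuzzy C-means updates. This minimal NumPy version omits the EMD denoising stage and any convergence test, and its parameters (fuzzifier m, iteration count) are illustrative.

```python
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, n_iter=100, seed=0):
    """Minimal fuzzy C-means (FCM) sketch in NumPy.

    X: (n_samples, n_features) data; c: number of clusters;
    m: fuzzifier (m > 1).  Returns (centres, memberships).
    In FCM-EMD the same updates would run on EMD-denoised
    expression profiles; the EMD step is omitted here.
    """
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    U = rng.random((n, c))
    U /= U.sum(axis=1, keepdims=True)              # rows sum to 1
    for _ in range(n_iter):
        W = U ** m                                 # fuzzified weights
        centres = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2)
        d = np.maximum(d, 1e-12)                   # avoid division by zero
        inv = d ** (-2.0 / (m - 1.0))
        U = inv / inv.sum(axis=1, keepdims=True)   # membership update
    return centres, U
```

On two well-separated groups of points this recovers one fuzzy cluster per group, with each membership row summing to one.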

Relevance:

10.00%

Publisher:

Abstract:

‘Top Ten Box Office Blockbusters in Dollars’ is an ongoing series of works that represents the production budgets and worldwide gross profits of the top ten grossing films of all time. By displaying these data on top of the full running time of each blockbuster, the viewer’s attention is drawn back and forth between the amassing dollar figures and the original film’s highly polished presentation. In doing so, the work aims to provide a new opportunity to enjoy these immensely popular films with a new sense of value. The exhibition was selected for the Artistic Program at MetroArts, Brisbane, in 2010.

Relevance:

10.00%

Publisher:

Abstract:

We investigate how differences in the goals of male and female entrepreneurs affect business resources, outcomes and satisfaction with those outcomes. To investigate this topic, we use the CAUSEE database to access a longitudinal sample of 247 female-controlled and 332 male-controlled young Australian firms. We find that female entrepreneurs are less motivated by business growth and invest less time developing their businesses, and yet, even when profits are lower, they are more satisfied with their profit performance. Our results support prior qualitative studies indicating that female business owners want greater flexibility and manageability in terms of balancing their family and work responsibilities. Our findings also suggest that future dialogue on firm performance should include an analysis of the entrepreneur’s achievement in terms of both financial and personal goals.

Relevance:

10.00%

Publisher:

Abstract:

The removal of the sulfate anion from water using synthetic hydrotalcite (Mg/Al LDH) was investigated using powder X-ray diffraction (XRD) and thermogravimetric analysis (TG). Synthetic hydrotalcite, Mg6Al2(OH)16(CO3)·4H2O, was prepared by the co-precipitation method from aluminum and magnesium chloride salts. The synthetic hydrotalcite was thermally activated to a maximum temperature of 380 °C. Samples of thermally activated hydrotalcite were then treated with aliquots of 1000 ppm sulfate solution. The resulting products were dried and characterized by XRD and TG. Powder XRD revealed that hydrotalcite had been successfully prepared and that the product obtained after treatment with sulfate solution also conformed well to the reference pattern of hydrotalcite. The d(003) spacing of all samples was found to be within the acceptable region for an LDH structure. TG revealed that all products underwent a decomposition similar to that of hydrotalcite. It was possible to propose a reasonable mechanism for the thermal decomposition of a sulfate-containing Mg/Al LDH. The similarities in the results may indicate that the reformed hydrotalcite contains carbonate anions as well as sulfate. Further investigation is required to confirm this.
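As a worked aside, the d(003) spacing reported from powder XRD follows directly from Bragg's law. The wavelength and peak position below (Cu K-alpha radiation, 2-theta near 11.5 degrees) are assumed typical values for a carbonate-interlayer LDH, not figures from this study.

```python
import math

def bragg_d_spacing(two_theta_deg: float, wavelength: float = 1.5406) -> float:
    """Interlayer d-spacing (in Angstroms) from a powder-XRD peak position.

    Bragg's law: n * lambda = 2 * d * sin(theta), taking n = 1 and
    Cu K-alpha radiation (1.5406 A) as assumed defaults.  For an
    Mg/Al LDH, the d(003) reflection near 2-theta ~ 11.5 degrees
    corresponds to a basal spacing of roughly 7.7 A, typical of a
    carbonate interlayer.
    """
    theta = math.radians(two_theta_deg / 2.0)
    return wavelength / (2.0 * math.sin(theta))
```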

Relevance:

10.00%

Publisher:

Abstract:

A modified microstrip-fed planar monopole antenna with an open-circuited coupled line is presented in this paper. The operational bandwidth of the proposed antenna covers the 2.4 GHz ISM band (2.42-2.48 GHz) and the 5 GHz WLAN band (5 GHz to 6 GHz). The radiating elements occupy a small area of 23 × 8 mm². The finite-difference time-domain (FDTD) method is used to predict the input impedance of the antenna. The calculated return loss shows very good agreement with measured data. Reasonable antenna gain is observed across the operating bands. The measured radiation patterns are similar to those of a simple monopole antenna.
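The reported return loss relates to the predicted input impedance through the reflection coefficient; a minimal sketch, assuming a 50-ohm reference impedance (the usual convention, not stated in the abstract):

```python
import math

def return_loss_db(z_in: complex, z0: float = 50.0) -> float:
    """Return loss (dB) from an antenna's input impedance.

    Gamma = (Z_in - Z0) / (Z_in + Z0); return loss = -20*log10(|Gamma|).
    A common rule of thumb takes return loss >= 10 dB (|S11| <= -10 dB)
    as the usable bandwidth, e.g. over the 2.4 GHz ISM and 5 GHz WLAN
    bands covered by this antenna.  Z0 = 50 ohms is an assumption.
    """
    gamma = (z_in - z0) / (z_in + z0)
    return -20.0 * math.log10(abs(gamma))

# A slightly mismatched 75-ohm input gives roughly 14 dB return loss.
print(round(return_loss_db(75 + 0j), 1))  # 14.0
```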

Relevance:

10.00%

Publisher:

Abstract:

In information retrieval (IR) research, increasing focus has been placed on optimizing a query language model by detecting and estimating the dependencies between the query and the observed terms occurring in the selected relevance feedback documents. In this paper, we propose a novel Aspect Language Modeling framework featuring term association acquisition, document segmentation, query decomposition, and an Aspect Model (AM) for parameter optimization. Through the proposed framework, we advance the theory and practice of applying high-order and context-sensitive term relationships to IR. We first decompose a query into subsets of query terms. Then we segment the relevance feedback documents into chunks using multiple sliding windows. Finally, we discover the higher-order term associations, that is, the terms in these chunks with a high degree of association to the subsets of the query. In this process, we adopt an approach combining the AM with Association Rule (AR) mining. In our approach, the AM not only considers the subsets of a query as “hidden” states and estimates their prior distributions, but also evaluates the dependencies between the subsets of a query and the observed terms extracted from the chunks of feedback documents. The AR mining provides a reasonable initial estimation of the high-order term associations by discovering association rules from the document chunks. Experimental results on various TREC collections verify the effectiveness of our approach, which significantly outperforms a baseline language model and two state-of-the-art query language models, namely the Relevance Model and the Information Flow model.
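The chunking and AR-based association steps can be sketched as follows. The window and step sizes, and the support-style association measure, are illustrative assumptions, a simple stand-in for the paper's actual AR mining and Aspect Model estimation.

```python
def sliding_chunks(tokens, window=8, step=4):
    """Segment a token sequence into overlapping chunks, mimicking the
    'multiple sliding windows' segmentation of feedback documents."""
    return [tokens[i:i + window]
            for i in range(0, max(1, len(tokens) - window + 1), step)]

def association_support(chunks, query_subset, term):
    """Support-style estimate of association between a query subset and
    a candidate expansion term: the fraction of chunks containing both.
    This is an illustrative stand-in for the AR-mining step that feeds
    initial estimates to the Aspect Model."""
    hits = sum(1 for ch in chunks
               if query_subset.issubset(ch) and term in ch)
    return hits / len(chunks)
```

For example, with chunks built from a short token stream, a term that co-occurs with the query subset in two of three chunks gets support 2/3.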

Relevance:

10.00%

Publisher:

Abstract:

Summaries of legal cases, legislation and developments in law and accounting relevant to nonprofit organisations and charity law during 2011; including articles on special issues such as accounting standards and the chart of accounts; law reform (e.g. the new national regulator, the Australian Charities and Not-for-profits Commission); and taxation.

Relevance:

10.00%

Publisher:

Abstract:

A range of varying chromophore nitroxide free radicals and their nonradical methoxyamine analogues were synthesized and their linear photophysical properties examined. The presence of the proximate free radical masks the chromophore’s usual fluorescence emission, and these species are described as profluorescent. Two nitroxides incorporating anthracene and fluorescein chromophores (compounds 7 and 19, respectively) exhibited two-photon absorption (2PA) cross sections of approximately 400 GM (Goeppert-Mayer units) when excited at wavelengths greater than 800 nm. Both of these profluorescent nitroxides demonstrated low cytotoxicity toward Chinese hamster ovary (CHO) cells. Imaging colocalization experiments with the commercially available CellROX Deep Red oxidative stress monitor demonstrated good cellular uptake of the nitroxide probes. Sensitivity of the nitroxide probes to H2O2-induced damage was also demonstrated by both one- and two-photon fluorescence microscopy. These profluorescent nitroxide probes are potentially powerful tools for imaging oxidative stress in biological systems, and they essentially “light up” in the presence of certain species generated from oxidative stress. The high ratio of the fluorescence quantum yield between the profluorescent nitroxide species and their nonradical adducts provides the sensitivity required for measuring a range of cellular redox environments. Furthermore, their reasonable 2PA cross sections provide for the option of using two-photon fluorescence microscopy, which circumvents commonly encountered disadvantages associated with one-photon imaging such as photobleaching and poor tissue penetration.

Relevance:

10.00%

Publisher:

Abstract:

This project investigates machine listening and improvisation in interactive music systems, with the goal of improvising musically appropriate accompaniment to an audio stream in real-time. The input audio may be from a live musical ensemble, or playback of a recording for use by a DJ. I present a collection of robust techniques for machine listening in the context of Western popular dance music genres, and strategies of improvisation that allow for intuitive and musically salient interaction in live performance. The findings are embodied in a computational agent – the Jambot – capable of real-time musical improvisation in an ensemble setting. Conceptually, the agent’s functionality is split into three domains: reception, analysis and generation. The project has resulted in novel techniques for addressing a range of issues in each of these domains. In the reception domain, I present a novel suite of onset detection algorithms for real-time detection and classification of percussive onsets. This suite achieves reasonable discrimination between the kick, snare and hi-hat attacks of a standard drum kit, with sufficiently low latency to allow perceptually simultaneous triggering of accompaniment notes. The onset detection algorithms are designed to operate in the context of complex polyphonic audio. In the analysis domain, I present novel beat-tracking and metre-induction algorithms that operate in real-time and are responsive to change in a live setting. I also present a novel analytic model of rhythm, based on musically salient features. This model informs the generation process, affording intuitive parametric control and allowing for the creation of a broad range of interesting rhythms. In the generation domain, I present a novel improvisatory architecture drawing on theories of music perception, which provides a mechanism for the real-time generation of complementary accompaniment in an ensemble setting.
All of these innovations have been combined into a computational agent – the Jambot, which is capable of producing improvised percussive musical accompaniment to an audio stream in real-time. I situate the architectural philosophy of the Jambot within contemporary debate regarding the nature of cognition and artificial intelligence, and argue for an approach to algorithmic improvisation that privileges the minimisation of cognitive dissonance in human-computer interaction. This thesis contains extensive written discussions of the Jambot and its component algorithms, along with some comparative analyses of aspects of its operation and aesthetic evaluations of its output. The accompanying CD contains the Jambot software, along with video documentation of experiments and performances conducted during the project.
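A minimal sketch of a spectral-flux onset detector, the kind of front end on which percussive onset detection is commonly built. The Jambot's actual suite adds percussive classification (kick/snare/hi-hat) and low-latency constraints on top of this, and the threshold rule (`mean + k * std`) is an assumption for illustration.

```python
import numpy as np

def spectral_flux_onsets(x, frame=1024, hop=512, k=1.5):
    """Spectral-flux onset detection sketch (offline, for clarity).

    Frames the signal, takes magnitude spectra, sums the positive
    spectral differences between consecutive frames, and marks frames
    whose flux exceeds mean + k * std as onsets.  A real-time detector
    would use a causal, adaptive threshold instead.
    """
    window = np.hanning(frame)
    n_frames = 1 + (len(x) - frame) // hop
    mags = np.array([np.abs(np.fft.rfft(window * x[i * hop:i * hop + frame]))
                     for i in range(n_frames)])
    flux = np.maximum(np.diff(mags, axis=0), 0.0).sum(axis=1)
    thresh = flux.mean() + k * flux.std()
    return np.flatnonzero(flux > thresh) + 1   # frame indices of onsets
```

On a signal that is silent and then switches to a steady tone, the flux spikes only at the transition, so the detector flags the onset frame.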

Relevance:

10.00%

Publisher:

Abstract:

Purpose: We provide an account of the relationships between eye shape, retinal shape and peripheral refraction. Recent findings: We discuss how eye and retinal shapes may be described as conicoids, and we describe an axis and section reference system for determining shapes. Explanations are given of how patterns of retinal expansion during the development of myopia may contribute to changing patterns of peripheral refraction, and how pre-existing retinal shape might contribute to the development of myopia. Direct and indirect techniques for determining eye and retinal shape are described, and results are discussed. There is reasonable consistency in the literature that eye length increases at a greater rate than height and width as the degree of myopia increases, so that eyes may be described as changing from oblate/spherical shapes to prolate shapes. However, one study indicates that the retina itself, while showing the same trend, remains oblate in shape for most eyes (discounting high myopia). Eye shape and retinal shape are not the same, and merely describing an eye shape as prolate or oblate is insufficient without some understanding of the parameters contributing to this; in myopia, a prolate eye shape is likely to involve a steepening of the retina near the posterior pole combined with a flattening (or a reduction in steepening compared with an emmetrope) away from the pole.
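The conicoid descriptions of eye and retinal shape can be made concrete with the standard conic sag equation; the radius and asphericity values used in the example below are illustrative, not clinical data from the review.

```python
import math

def conicoid_sag(r, R, Q):
    """Sag z of a conicoid surface at distance r from the axis.

    z = (r^2 / R) / (1 + sqrt(1 - (1 + Q) * r^2 / R^2))
    R: apical radius of curvature; Q: asphericity.
    Q = 0 gives a sphere; Q > 0 an oblate shape (steepening away
    from the pole); Q < 0 a prolate shape (flattening away from
    the pole), matching the shape descriptions used in the review.
    """
    return (r**2 / R) / (1.0 + math.sqrt(1.0 - (1.0 + Q) * r**2 / R**2))
```

For Q = 0 this reduces exactly to the spherical sag R - sqrt(R^2 - r^2), and a prolate surface (Q < 0) has less sag than the sphere at the same off-axis distance.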