Abstract:
Open pit mine operations are complex businesses that demand a constant assessment of risk. This is because the value of a mine project is typically influenced by many underlying economic and physical uncertainties, such as metal prices, metal grades, costs, schedules, quantities, and environmental issues, among others, which are not known with much certainty at the beginning of the project. Hence, mining projects present a considerable challenge to those involved in the associated investment decisions, such as the owners of the mine and other stakeholders. In general terms, when an option exists to acquire a new or operating mining project, the owners and stockholders of the mine project need to know the value of the mining project, which is the fundamental criterion for making final decisions about going ahead with the venture. However, obtaining the mine project's value is not an easy task. The reason for this is that sophisticated valuation and mine optimisation techniques, which combine advanced theories in geostatistics, statistics, engineering, economics and finance, among others, need to be used by the mine analyst or mine planner in order to assess and quantify the existing uncertainty and, consequently, the risk involved in the project investment. Furthermore, current valuation and mine optimisation techniques do not complement each other. That is, valuation techniques based on real options (RO) analysis assume an expected (constant) metal grade and ore tonnage during a specified period, while mine optimisation (MO) techniques assume expected (constant) metal prices and mining costs. These assumptions are not entirely correct, since both sources of uncertainty, that of the orebody (metal grade and mineral reserves) and that of the future behaviour of metal prices and mining costs, have a great impact on the value of any mining project. Consequently, the key objective of this thesis is twofold. The first objective consists of analysing and understanding the main sources of uncertainty in an open pit mining project, such as the orebody (in situ metal grade), mining cost and metal price uncertainties, and their effect on the final project value. The second objective consists of breaking down the wall of isolation between economic valuation and mine optimisation techniques in order to generate a novel open pit mine evaluation framework called the "Integrated Valuation/Optimisation Framework" (IVOF). One important characteristic of this new framework is that it incorporates the RO and MO valuation techniques into a single integrated process that quantifies and describes uncertainty and risk in a mine project evaluation process, giving a more realistic estimate of the project's value. To achieve this, novel and advanced engineering and econometric methods are used to integrate financial and geological uncertainty into dynamic risk forecasting measures. The proposed mine valuation/optimisation technique is then applied to a real disseminated gold open pit deposit to estimate its value in the face of orebody, mining cost and metal price uncertainties.
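As a minimal illustration of how metal-price uncertainty of this kind can be propagated into a distribution of project value, the sketch below runs a Monte Carlo simulation under a geometric Brownian motion price model with invented parameters. The thesis itself may rely on different stochastic price processes (for example mean reversion) and also simulates orebody (grade and tonnage) uncertainty, which is omitted here.

```python
import numpy as np

# Minimal sketch: Monte Carlo of metal-price uncertainty feeding a yearly
# cash-flow / NPV distribution. All parameters are illustrative, and a
# geometric Brownian motion is assumed purely for demonstration.

rng = np.random.default_rng(42)

years, n_paths = 10, 10_000
s0, mu, sigma = 1500.0, 0.02, 0.25      # spot price ($/oz), drift, volatility
ounces_per_year = 100_000               # constant production, for illustration
cash_cost = 1100.0                      # $/oz mining + processing cost
discount_rate = 0.08

# Simulate annual metal-price paths under GBM.
dt = 1.0
shocks = rng.standard_normal((n_paths, years))
log_paths = np.cumsum((mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * shocks, axis=1)
prices = s0 * np.exp(log_paths)         # shape (n_paths, years)

# Yearly cash flow and discounted NPV per price path.
cash_flows = (prices - cash_cost) * ounces_per_year
discount = (1.0 + discount_rate) ** -np.arange(1, years + 1)
npv = cash_flows @ discount

print(f"mean NPV  : {npv.mean() / 1e6:8.1f} M$")
print(f"5th pct   : {np.percentile(npv, 5) / 1e6:8.1f} M$")
print(f"P(NPV < 0): {np.mean(npv < 0):.2%}")
```

Replacing the fixed production and cost figures with simulated grade and cost paths would move this sketch toward the kind of integrated valuation the abstract describes.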
Abstract:
Process modeling is an important design practice in organizational improvement projects. In this paper, we examine the design of business process diagrams in contexts where novice analysts only have basic design tools such as paper and pencils available, and little to no understanding of formalized modeling approaches. Based on a quasi-experimental study with 89 BPM students, we identify five distinct process design archetypes ranging from textual to hybrid and graphical representation forms. We examine the quality of the designs and identify which representation formats enable an analyst to articulate business rules, states, events, activities, temporal and geospatial information in a process model. We found that the quality of the process designs decreases with the increased use of graphics and that hybrid designs featuring appropriate text labels and abstract graphical forms appear well-suited to describe business processes. We further examine how process design preferences predict formalized process modeling ability. Our research has implications for practical process design work in industry as well as for academic curricula on process design.
Abstract:
Nonprofit organizations present the analyst with a slew of puzzles. To an economist conditioned to think in terms of objectives and constraints, even the mathematical definition of the beast is a problem. What is a nonprofit organization? How does this definition shape the elaboration of objectives and constraints?
Abstract:
Purpose – Managers generally have discretion in determining how components of earnings are presented in financial statements, distinguishing between 'normal' earnings and items classified as unusual, special, significant, exceptional or abnormal. Prior research has found that such intra-period classificatory choice is used as a form of earnings management. Prior to 2001, Australian accounting standards mandated that unusually large items of revenue and expense be classified as 'abnormal items' for financial reporting, but this classification was removed from accounting standards from 2001. This move by the regulators was partly in response to concerns that the abnormal classification was being used opportunistically to manage reported pre-abnormal earnings. This study extends the earnings management literature by examining the reporting of abnormal items for evidence of intra-period classificatory earnings management in the unique Australian setting.
Design/methodology/approach – This study investigates associations between the reporting of abnormal items and incentives in the form of analyst following and the earnings benchmarks of analysts' forecasts, earnings levels, and earnings changes, for a sample of Australian top-500 firms for the seven-year period from 1994 to 2000.
Findings – The findings suggest there are systematic differences between firms reporting abnormal items and those with no abnormal items. Results show evidence that, on average, firms shifted expense items from pre-abnormal earnings to bottom-line net income through reclassification as abnormal losses.
Originality/value – These findings suggest that the standard setters were justified in removing the 'abnormal' classification from the accounting standard. However, it cannot be assumed that all firms acted opportunistically in the classification of items as abnormal. With the removal of the standardised classification of items outside normal operations as 'abnormal', firms lost the opportunity to use such disclosures as a signalling device, with the consequential effect of limiting the scope for effectively communicating information about the nature of items presented in financial reports.
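A minimal sketch of one diagnostic in the spirit of this study: comparing how often firms that report abnormal losses meet the analyst-forecast benchmark with pre-abnormal earnings versus bottom-line earnings. The firm-year records and variable names below are invented placeholders, not the paper's data or research design.

```python
import pandas as pd

# Hypothetical firm-year records (per-share figures), purely for illustration.
df = pd.DataFrame({
    "firm":             ["A", "B", "C", "D", "E", "F"],
    "abnormal_items":   [-0.05, 0.00, -0.02, 0.01, -0.04, 0.00],
    "pre_abnormal_eps": [ 0.52, 0.40,  0.61, 0.35,  0.48, 0.30],
    "net_eps":          [ 0.47, 0.40,  0.59, 0.36,  0.44, 0.30],
    "forecast_eps":     [ 0.50, 0.42,  0.60, 0.34,  0.45, 0.31],
})

df["abn_loss"] = df["abnormal_items"] < 0
df["beats_pre_abnormal"] = df["pre_abnormal_eps"] >= df["forecast_eps"]
df["beats_bottom_line"] = df["net_eps"] >= df["forecast_eps"]

# A large gap between the two rates for abn_loss == True is consistent with
# expenses being shifted out of pre-abnormal earnings via abnormal losses.
print(df.groupby("abn_loss")[["beats_pre_abnormal", "beats_bottom_line"]].mean())
```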
Abstract:
The Australian Securities Exchange (ASX) listing rule 3.1 requires listed companies to immediately disclose price-sensitive information to the market via the ASX's Company Announcements Platform (CAP) prior to its release through other disclosure channels. Since 1999, to improve the communication process, the ASX has permitted third-party mediation in the disclosure process that leads to the release of an Open Briefing (OB) through CAP. An OB is an interview between senior executives of the firm and an Open Briefing analyst employed by Orient Capital Pty Ltd (broaching topics such as current profit and outlook). Motivated by an absence of research on the factors that influence firms to use OBs as a discretionary disclosure channel, this study examines: (1) why do firms choose to release information to the market via OBs; (2) what firm characteristics explain the discretionary use of OBs as a disclosure channel; and (3) what disclosure attributes influence firms' decisions to regularly use OBs as a disclosure channel? Based on agency and information economics theories, a theoretical framework is developed to address the research questions. This theoretical framework comprises disclosure environments such as firm characteristics and external factors, disclosure attributes and disclosure consequences. In order to address the first research question, the study investigates (i) the purpose of using OBs, (ii) whether firms use OBs to provide information relating to previous public announcements, and (iii) whether firms use OBs to provide routine or non-routine disclosures. In relation to the second and third research questions, hypotheses are developed to test the factors expected to explain the discretionary use of OBs and firms' decisions to regularly use OBs, and to explore the factors influencing the nature of OB disclosure. Content analysis and logistic regression models are used to investigate the research questions and test the hypotheses. Data are drawn from a hand-collected population of 1863 OB announcements issued by 239 listed firms between 2000 and 2010. The results show that the types of information disclosed via an OB announcement principally concern corporate strategies, performance and outlook. Most OB announcements are linked with a previous related announcement, with the lag between announcements significantly longer for loss-making firms than for profit-making firms. The main results show that firms which tend to be larger, have an analyst following, and have higher growth opportunities are more likely to release OBs. Further, older firms and firms that release OB announcements containing good news, historical information and less complex information tend to be regular OB users. Lastly, firms that are more likely to disclose strategic information via OBs tend to operate in industries facing greater uncertainty, do not have an analyst following, have higher growth opportunities, and are less likely to disclose good news, historical information and complex information via OBs. This study is expected to contribute to the disclosure literature in terms of the disclosure attributes and firm characteristics that influence behaviour in this unique (OB) disclosure channel. With regard to practical significance, regulators can gain an understanding of how OBs are disclosed, which can assist them in monitoring the use of OBs and improving the effectiveness of communications with stakeholders.
In addition, investors can have a better comprehension of information contained in OB announcements, which may in turn better facilitate their investment decisions.
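A minimal sketch of the logistic regression step mentioned in the abstract: modelling the likelihood that a firm releases Open Briefings as a function of firm characteristics. The data are simulated and the explanatory variables (size, analyst following, growth) only mirror the kinds of factors the study tests; none of this is the thesis's actual dataset or model specification.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Simulated firm-year data standing in for the hand-collected ASX sample.
rng = np.random.default_rng(7)
n = 500
df = pd.DataFrame({
    "log_market_cap":    rng.normal(19, 2, n),
    "analyst_following": rng.poisson(4, n),
    "mtb_ratio":         rng.lognormal(0.5, 0.4, n),
})

# Simulate OB usage so larger, more-followed, higher-growth firms are more
# likely to release Open Briefings (purely for illustration).
logit_p = (-14 + 0.6 * df["log_market_cap"]
           + 0.3 * df["analyst_following"] + 0.4 * df["mtb_ratio"])
df["uses_ob"] = rng.uniform(size=n) < 1 / (1 + np.exp(-logit_p))

# Logit of OB usage on firm characteristics.
X = sm.add_constant(df[["log_market_cap", "analyst_following", "mtb_ratio"]])
model = sm.Logit(df["uses_ob"].astype(int), X).fit(disp=False)
print(model.summary())
```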
Abstract:
The state of the practice in safety has advanced rapidly in recent years with the emergence of new tools and processes for improving selection of the most cost-effective safety countermeasures. However, many challenges prevent fair and objective comparisons of countermeasures applied across safety disciplines (e.g. engineering, emergency services, and behavioral measures). These countermeasures operate at different spatial scales, are often funded by different financial sources and agencies, and have associated costs and benefits that are difficult to estimate. This research proposes a methodology by which both behavioral and engineering safety investments are considered and compared in a specific local context. The methodology involves a multi-stage process that enables the analyst to select countermeasures that yield high benefits relative to costs, are targeted for a particular project, and that may involve costs and benefits that accrue over varying spatial and temporal scales. The methodology is illustrated using a case study from the Geary Boulevard Corridor in San Francisco, California. The case study illustrates that: 1) the methodology enables the identification and assessment of a wide range of safety investment types at the project level; 2) the nature of crash histories lends itself to the selection of both behavioral and engineering investments, requiring cooperation across agencies; and 3) the results of the cost-benefit analysis are highly sensitive to cost and benefit assumptions, and thus a listing and justification of all assumptions is required. It is recommended that a sensitivity analysis be conducted when there is large uncertainty surrounding cost and benefit assumptions.
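A minimal sketch of the comparison step such a methodology implies: discounting the costs and crash-reduction benefits of engineering and behavioral countermeasures to present value so they can be ranked on a common benefit-cost basis. The countermeasures, dollar figures, service lives and discount rate below are all illustrative, not drawn from the Geary Boulevard case study.

```python
# Compare dissimilar safety countermeasures on a common present-value footing.
DISCOUNT_RATE = 0.04

def present_value(annual_amount, years, rate=DISCOUNT_RATE):
    """Present value of a constant annual cost or benefit stream."""
    return sum(annual_amount / (1 + rate) ** t for t in range(1, years + 1))

countermeasures = [
    # (name, upfront cost, annual cost, annual safety benefit, service life in years)
    ("Signal retiming (engineering)",       50_000,  5_000, 40_000, 10),
    ("Red-light enforcement (behavioral)",  20_000, 60_000, 90_000,  5),
    ("Pedestrian refuge islands",          300_000,  2_000, 55_000, 20),
]

for name, capital, annual_cost, annual_benefit, life in countermeasures:
    cost = capital + present_value(annual_cost, life)
    benefit = present_value(annual_benefit, life)
    print(f"{name:38s}  B/C = {benefit / cost:4.2f}")
```

As the abstract notes, these rankings are highly sensitive to the assumed costs, benefits and discount rate, which is why a sensitivity analysis over those inputs is recommended.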
Abstract:
In this study, a promising metabolomic approach integrated with Ingenuity Pathway Analysis (IPA) was applied to characterize the tissue-specific metabolic perturbation of rats induced by indomethacin. Selective pattern recognition analyses were applied to the global metabolic profiles of urine from rats treated with indomethacin at an acute reference dosage proven to induce tissue disorders in rats, evaluated throughout the time course of -24 to 72 h. The results preliminarily revealed that modifications of amino acid metabolism, fatty acid metabolism and energetically associated metabolic pathways accounted for the metabolic perturbation induced by indomethacin. Furthermore, IPA was applied to analyze in depth the biomarkers and their relations with the metabolic perturbations evidenced by the pattern recognition analyses. The specific biochemical functions affected by indomethacin suggested an important correlation of its effects on kidney and liver metabolism, based on the determined metabolites and their pathway-based analysis. The IPA correlation of the three major biomarkers, identified as creatinine, prostaglandin E2 and guanosine, suggested that the administration of indomethacin induced certain levels of toxicity in the kidneys and liver. The changes in the levels of biomarker metabolites allowed the phenotypical determination of the metabolic perturbations induced by indomethacin in a time-dependent manner.
Abstract:
Metabolomic profiling offers direct insights into the chemical environment and metabolic pathway activities at sites of human disease. During infection, this environment may receive important contributions from both host and pathogen. Here we apply an untargeted metabolomics approach to identify compounds associated with an E. coli urinary tract infection population. Correlative and structural data from minimally processed samples were obtained using an optimized LC-MS platform capable of resolving ~2300 molecular features. Principal component analysis readily distinguished patient groups and multiple supervised chemometric analyses resolved robust metabolomic shifts between groups. These analyses revealed nine compounds whose provisional structures suggest candidate infection-associated endocrine, catabolic, and lipid pathways. Several of these metabolite signatures may derive from microbial processing of host metabolites. Overall, this study highlights the ability of metabolomic approaches to directly identify compounds encountered by, and produced from, bacterial pathogens within human hosts.
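A minimal sketch of the unsupervised step described here: principal component analysis of an LC-MS feature matrix (samples by molecular features) to check whether patient groups separate. The data are simulated stand-ins, not the study's ~2300-feature dataset.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Simulate a small feature matrix: 20 control and 20 infection samples, with a
# subset of features shifted in the infected group (purely for illustration).
rng = np.random.default_rng(0)
n_per_group, n_features = 20, 300
control = rng.normal(0.0, 1.0, size=(n_per_group, n_features))
infected = rng.normal(0.0, 1.0, size=(n_per_group, n_features))
infected[:, :25] += 1.5

X = np.vstack([control, infected])
labels = ["control"] * n_per_group + ["UTI"] * n_per_group

# Scale features, project onto the first two principal components.
scores = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(X))
for group in ("control", "UTI"):
    idx = [i for i, g in enumerate(labels) if g == group]
    print(group, "PC1 mean =", round(float(scores[idx, 0].mean()), 2))
```

Supervised chemometric methods such as PLS-DA would then be applied on top of this kind of matrix to resolve and validate the group-separating features.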
Abstract:
Postgraduate candidates in the creative arts encounter unique challenges when writing an exegesis (the written document that accompanies creative work as a thesis). As practitioner-researchers, they must adopt a dual perspective–looking out towards an established field of research, exemplars and theories, as well as inwards towards their experiential creative processes and practice. This dual orientation provides clear benefits, for it enables them to situate the research within its field and make objective claims for the research methodologies and outcomes while maintaining an intimate, voiced relationship with the practice. However, a dual orientation introduces considerable complexities in the writing. It requires a reconciliation of multi-perspectival subject positions: the disinterested academic posture of the observer/ethnographer/analyst/theorist at times, and the invested, subjective stance of the practitioner/producer at others. It requires the author to negotiate a range of writing styles and speech genres–from the formal, polemical style of the theorist to the personal, questioning and emotive voice of reflexivity. Moreover, these multi-variant orientations, subject positions, styles and voices must be integrated into a unified and coherent text. In this chapter I offer a conceptual framework and strategies for approaching this relatively new genre of thesis. I begin by summarizing the characteristics of what has begun to emerge as the predominant model of exegesis (the dual-oriented 'Connective' exegesis). Framing it against theoretical and philosophical understandings of polyvocality and matrixicality, I go on to point to recent textual models that provide precedents for connecting differently oriented perspectives, subjectivities and voices. I then turn to emergent archives of practice-led research to explain how the challenge of writing a 'Connective' exegesis has so far been resolved by higher degree research (HDR) candidates. Exemplars illustrate a range of strategies they have used to compose a multi-perspectival text, reconcile the divergent subject positions of the practitioner-researcher, and harmonize the speech genres of a polyvocal text.
Abstract:
Cloud computing is an emerging computing paradigm in which IT resources are provided over the Internet as a service to users. One such service offered through the Cloud is Software as a Service, or SaaS. SaaS can be delivered in a composite form, consisting of a set of application and data components that work together to deliver higher-level functional software. SaaS is receiving substantial attention today from both software providers and users, and analyst firms also predict a positive future market for it. This raises new challenges for providers managing SaaS, especially in large-scale data centres such as the Cloud. One of the challenges is providing management of Cloud resources for SaaS that guarantees SaaS performance while optimising resource use. Extensive research on the resource optimisation of Cloud services has not yet addressed the challenges of managing resources for composite SaaS. This research addresses this gap by focusing on three new problems of composite SaaS: placement, clustering and scalability. The overall aim is to develop efficient and scalable mechanisms that facilitate the delivery of high-performance composite SaaS for users while optimising the resources used. All three problems are characterised as highly constrained, large-scale and complex combinatorial optimisation problems. Therefore, evolutionary algorithms are adopted as the main technique in solving these problems. The first research problem concerns how a composite SaaS is placed onto Cloud servers to optimise its performance while satisfying the SaaS resource and response time constraints. Existing research on this problem often ignores the dependencies between components and considers placement of a homogeneous type of component only. A precise formulation of the composite SaaS placement problem is presented. A classical genetic algorithm and two versions of cooperative co-evolutionary algorithms are designed to manage the placement of heterogeneous types of SaaS components together with their dependencies, requirements and constraints. Experimental results demonstrate the efficiency and scalability of these new algorithms. In the second problem, SaaS components are assumed to be already running on Cloud virtual machines (VMs). However, due to the changing environment of a Cloud, the current placement may need to be modified. Existing techniques focus mostly on the infrastructure level rather than the application level. This research addresses the problem at the application level by clustering suitable components onto VMs to optimise resource use and maintain SaaS performance. Two versions of grouping genetic algorithms (GGAs) are designed to cater for the structural grouping of a composite SaaS. The first GGA used a repair-based method while the second used a penalty-based method to handle the problem constraints. The experimental results confirmed that the GGAs always produced a better reconfiguration placement plan compared with a common heuristic for clustering problems. The third research problem deals with the replication or deletion of SaaS instances in coping with the SaaS workload. Determining a scaling plan that can minimise the resources used and maintain SaaS performance is a critical task. Additionally, the problem contains constraints and interdependencies between components, making solutions even more difficult to find.
A hybrid genetic algorithm (HGA) was developed to solve this problem by exploring the problem search space through its genetic operators and fitness function to determine the SaaS scaling plan. The HGA also uses the problem's domain knowledge to ensure that solutions meet the problem's constraints and achieve its objectives. The experimental results demonstrated that the HGA consistently outperformed a heuristic algorithm by achieving a low-cost scaling and placement plan. This research has identified three significant new problems for composite SaaS in the Cloud. Various types of evolutionary algorithms have been developed to address these problems, contributing to the field of evolutionary computation. The algorithms provide solutions for efficient resource management of composite SaaS in the Cloud, resulting in a low total cost of ownership for users while guaranteeing SaaS performance.
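A minimal sketch of a penalty-based genetic algorithm for assigning SaaS components to servers, in the spirit of (but far simpler than) the algorithms this thesis develops. Component demands, server capacities and GA parameters are illustrative, and real composite-SaaS placement would also need to model inter-component dependencies and response-time constraints.

```python
import random

# Toy placement problem: assign components to servers, minimising the number
# of servers used while penalising over-capacity placements.
random.seed(1)

demands = [4, 3, 6, 2, 5, 3, 4, 2]          # CPU units needed per component
capacity = 10                                # CPU units per server
n_servers = 4
POP, GENS, PENALTY = 60, 200, 100

def fitness(assign):
    """Servers used, plus a heavy penalty per unit of over-capacity (lower is better)."""
    load = [0] * n_servers
    for comp, srv in enumerate(assign):
        load[srv] += demands[comp]
    used = sum(1 for l in load if l > 0)
    overflow = sum(max(0, l - capacity) for l in load)
    return used + PENALTY * overflow

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def mutate(assign, rate=0.1):
    return [random.randrange(n_servers) if random.random() < rate else s
            for s in assign]

# Evolve a population of candidate placements.
pop = [[random.randrange(n_servers) for _ in demands] for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=fitness)
    survivors = pop[:POP // 2]
    children = [mutate(crossover(random.choice(survivors), random.choice(survivors)))
                for _ in range(POP - len(survivors))]
    pop = survivors + children

best = min(pop, key=fitness)
print("best placement:", best, "fitness:", fitness(best))
```

The penalty term here plays the role of the constraint-handling methods the abstract mentions: infeasible placements remain in the population but are strongly disfavoured by the fitness function.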
Electrochemical fabrication of metallic nanostructured electrodes for electroanalytical applications
Abstract:
The use of electrodeposited metal-based nanostructures for electroanalytical applications has recently received widespread attention. There are several approaches to creating nanostructured materials through electrochemical routes, including facile electrodeposition at either untreated or modified electrodes and the use of physical or chemical templating methods. This allows the shape, size and composition of the nanomaterial to be readily tuned for the application of interest. The use of such materials is particularly suited to electroanalytical applications. In this mini-review, an overview of nanostructured materials recently developed through electrochemical routes is presented, as well as their electroanalytical applications in areas of biological and environmental importance.
Abstract:
In popular contemporary use, the French term bricolage refers to the activities of the home handyman. It is sometimes used in a disparaging way to refer to work that is improvised, uninformed by expertise or specialist knowledge, and probably inferior in its results when compared with the work of a tradesman or professional. In 1962, anthropologist and philosopher Claude Lévi-Strauss argued that bricolage is a modality of human thought. Since then, the importance of bricolage as a mental activity has been identified in relation to art and architecture, as well as other fields of cultural activity. In this paper I consider bricolage as an activity of the ego and explore its role in the consulting room. I argue that by necessity the psychoanalytic work undertaken between patient and analyst relies on this modality of thought and, furthermore, that the use of bricolage is entirely compatible with evidence-based practice.
Abstract:
The deposition of biological material (biofouling) onto polymeric contact lenses is thought to be a major contributor to lens discomfort and hence discontinuation of wear. We describe a method to characterize lipid deposits directly from worn contact lenses utilizing liquid extraction surface analysis coupled to tandem mass spectrometry (LESA-MS/MS). This technique effected facile and reproducible extraction of lipids from the contact lens surfaces and identified lipid molecular species representing all major classes present in human tear film. Our data show that LESA-MS/MS is a rapid and comprehensive technique for the characterization of lipid-related biofouling on polymer surfaces.
Abstract:
The complete structural elucidation of complex lipids, including glycerophospholipids, using only mass spectrometry represents a major challenge to contemporary analytical technologies. Here, we demonstrate that product ions arising from the collision-induced dissociation (CID) of the [M + Na]+ adduct ions of phospholipids can be isolated and subjected to subsequent gas-phase ozonolysis, known as ozone-induced dissociation (OzID), in a linear ion-trap mass spectrometer. The resulting CID/OzID experiment yields abundant product ions that are characteristic of the acyl substitution on the glycerol backbone (i.e., sn-position). This approach is shown to differentiate sn-positional isomers, such as the regioisomeric phosphatidylcholine pair PC 16:0/18:1 and PC 18:1/16:0. Importantly, CID/OzID provides a sensitive diagnostic for the existence of an isomeric mixture in a given sample. This is of very high value for the analysis of tissue extracts, since CID/OzID analyses can reveal changes in the relative abundance of isomeric constituents even within different tissues from the same animal. Finally, we demonstrate the ability to assign carbon-carbon double bond positions to individual acyl chains at specific backbone positions by adding subsequent CID and/or OzID steps to the workflow, and that this can be achieved in a single step using a hybrid triple quadrupole-linear ion trap mass spectrometer. This unique approach represents the most complete and specific structural analysis of lipids by mass spectrometry demonstrated to date and is a significant step towards comprehensive top-down lipidomics. Grant Numbers: ARC/DP0986628, ARC/FT110100249, ARC/LP110200648.