922 results for value systems alignment
Abstract:
Wave energy converters (WECs) are currently proposed to be deployed near coastal areas because of their closeness to infrastructure and ease of maintenance, in order to reduce operational costs. The motivation behind this work is the fact that the deployment depths during the highest and lowest tides will have a significant effect on the mooring system of WECs. In this paper, the issue is investigated by numerical modelling (using ANSYS AQWA) for both catenary and taut moorings, to examine the performance of the mooring system under varying tides. The case study considered is the ¼-scale wave energy test site in Galway Bay, off the west coast of Ireland, where marine renewable energy devices can be tested. The site is macro-tidal, with a tidal range of approximately 6 m, which is large relative to the water depth. In the numerical analysis, the ANSYS AQWA suite has been used to simulate moored devices under wave excitation at varying tidal levels. Results show that the highest tide gives rise to larger mooring forces, while at the lowest water levels slackening of the mooring lines occurs. Therefore, the mooring lines must be designed to accommodate both situations.
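The depth sensitivity described above can be seen from the standard quasi-static catenary relations (a textbook approximation, not the AQWA formulation used in the paper). For a single line of submerged weight per unit length $w$, carrying horizontal tension $H$ at the fairlead in water depth $h$:

```latex
% Quasi-static catenary mooring line (anchor on the seabed, zero vertical load at touchdown)
% l_s : suspended line length,  T : tension at the fairlead
\begin{align}
  l_s &= h \sqrt{1 + \frac{2H}{wh}}, \\
  T   &= H + w h .
\end{align}
```

As the tide raises $h$, the fairlead tension grows and more line is lifted off the seabed; at low tide the same line has excess length lying on the bottom, which corresponds to the slackening observed in the simulations.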
Abstract:
The extractive industry is characterized by high levels of risk and uncertainty. These attributes create challenges when applying traditional accounting concepts (such as the revenue recognition and matching concepts) to the preparation of financial statements in the industry. The International Accounting Standards Board (2010) states that the objective of general purpose financial statements is to provide useful financial information to assist the capital allocation decisions of existing and potential providers of capital. Useful information is defined as information that is relevant and faithfully represented, so as to best aid the investment decisions of capital providers. Value relevance research uses adaptations of the Ohlson (1995) model to assess value relevance, one of the attributes that make information useful. This study firstly examines the value relevance of the financial information disclosed in the financial reports of extractive firms. The findings reveal that the value relevance of information disclosed in the financial reports depends on the circumstances of the firm, including sector, size and profitability. Traditional accounting concepts such as the matching concept can be ineffective when applied to small firms that are primarily engaged in non-production activities involving significant levels of uncertainty, such as exploration activities or the development of sites. Standard-setting bodies such as the International Accounting Standards Board and the Financial Accounting Standards Board have addressed the financial reporting challenges in the extractive industry by allowing a significant amount of accounting flexibility in industry-specific accounting standards, particularly in relation to the accounting treatment of exploration and evaluation expenditure.
Secondly, therefore, this study examines whether the choice of exploration accounting policy has an effect on the value relevance of information disclosed in the financial reports. The findings show that, in general, the Successful Efforts method produces value relevant information in the financial reports of profitable extractive firms. However, specifically in the oil & gas sector, the Full Cost method produces value relevant asset disclosures if the firm is loss-making. This indicates that investors in production and non-production orientated firms have different information needs, and that these needs cannot be simultaneously fulfilled by a single accounting policy. In the mining sector, a preference by large profitable mining companies for a policy more conservative than either the Full Cost or Successful Efforts methods does not result in more value relevant information being disclosed in the financial reports. This finding supports the view that the qualitative characteristic of prudence is a form of bias which has a downward effect on asset values. The third aspect of this study is an examination of the effect of corporate governance on the value relevance of disclosures made in the financial reports of extractive firms. The findings show that the key factor influencing the value relevance of financial information is the ability of the directors to select accounting policies which effectively reflect the economic substance of the particular circumstances facing the firm. Corporate governance is found to have an effect on value relevance, particularly in the oil & gas sector. However, there is no significant difference between the exploration accounting policy choices made by directors of firms with good systems of corporate governance and those with weak systems of corporate governance.
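For reference, value relevance studies of this kind typically estimate a price-levels regression adapted from Ohlson (1995). The exact specification used in this study is not given in the abstract, but a common form is:

```latex
% Price-levels specification adapted from Ohlson (1995)
% P_{it}: share price of firm i at time t; BVPS: book value per share; EPS: earnings per share
\begin{equation}
  P_{it} = \alpha_0 + \alpha_1\,\mathit{BVPS}_{it} + \alpha_2\,\mathit{EPS}_{it} + \varepsilon_{it}
\end{equation}
```

Accounting disclosures are deemed value relevant when the coefficients on the accounting variables are statistically significant and the regression has explanatory power for price.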
Abstract:
Waterways have many more ties with society than as a medium for the transportation of goods alone. Waterway systems offer society many kinds of socio-economic value. Waterway authorities responsible for management and (re)development need to optimize the public benefits of the investments made. However, due to the many trade-offs in the system, these agencies have multiple options for achieving this goal. Because they can invest resources in a great many different ways, they need a way to assess the efficiency of the decisions they make. Transaction cost theory, and the analysis that goes with it, has emerged as an important means of justifying efficiency decisions in the economic arena. To improve our understanding of the value-creating and coordination problems facing waterway authorities, such a framework is applied to this sector. This paper describes the findings for two cases, which reflect two common multi-trade-off situations in waterway (re)development. Our first case study focuses on the Miami River, a revitalized urban waterway. The second case describes the Inner Harbour Navigation Canal in New Orleans, a canal and lock in an industrialized zone in need of an upgrade to keep pace with market developments. The transaction cost framework appears to be useful in exposing a wide variety of value-creating opportunities and the resistances that come with them. These insights can offer infrastructure managers guidance on how to seize these opportunities.
Abstract:
The richness of dance comes from the need to work with an individual body. Still, the body of the dancer belongs to a plural context, crossed by artistic and social traditions, which locate the artist in a given field. We claim that role conflict is an essential component of the structure of collective artistic creativity. We address the production of discourse in a British dance company, with data drawn from the ethnography ‘Dance and Cognition’, directed by David Kirsh at the University of California, together with Wayne McGregor | Random Dance. Our Critical Discourse Analysis is based on multiple interviews with the dancers and the choreographer. Our findings show that creativity in dance is empirically observable as embodied and distributed, shaped by the dance habitus of the particular social context.
Abstract:
Purpose – The purpose of this paper is to present a case study regarding the deployment of a previously developed model for the integration of management systems (MSs). The case study is developed at a manufacturing site of an international enterprise. The implementation of this model in a real business environment is aimed at assessing its feasibility. Design/methodology/approach – The presented case study takes into account different management system standards (MSSs) progressively implemented over the years, independently of one another. The implementation of the model was supported by the results of an investigation performed according to a structured diagnosis, conducted to collect information on the organizational situation of the enterprise. Findings – The main findings are as follows: a robust integrated management system (IMS), objectively leaner, more structured and more manageable, was found to be feasible; the study provided a holistic view of the enterprise’s global management; clarification of job descriptions and of the boundaries of action and responsibility was achieved; greater efficiency in the use of resources was attained; and more coordinated management of the three pillars of sustainability – environmental, economic and social – as well as of risks was achieved, providing confidence and added value to the company and interested parties. Originality/value – This case study is pioneering in Portugal with respect to the implementation, at the level of an industrial organization, of the model previously developed for the integration of individualized MSs. The case study provides new insights regarding the implementation of IMSs, including the rationalization of several resources and the elimination of several types of organizational waste, leveraging gains in efficiency.
Due to its intrinsic characteristics, the model is able to support, progressively, new or revised MSSs that follow the principles of Annex SL (normative) – proposals for MSSs – of the International Organization for Standardization and the International Electrotechnical Commission, which the industrial organization may adopt in addition to the current ones.
Abstract:
Wastepaper sludge ash (WSA) is generated by a cogeneration station through the burning of wastepaper sludge. It mainly consists of an amorphous aluminosilicate phase, anhydrite, gehlenite, calcite, lime, C2S, C3A, quartz, anorthite and traces of mayenite. Because of its free lime content (~10%), a WSA suspension has a high pH (13). Previous researchers have found that the WSA composition has poor robustness and that its variations lead to unsoundness in Portland cement (PC) blended WSA concrete. This thesis focused on the use of WSA in different types of concrete mixes to avoid the deleterious effect of the expansion due to WSA hydration. WSA was used in making alkali-activated materials (AAMs), both as a precursor source and as a potential activator, in consideration of its amorphous content and highly alkaline nature. Moreover, the autogenous shrinkage behavior of PC concrete at low w/b ratio was exploited in order to compensate for the expansion effect due to WSA. The concrete properties as well as the volume change were investigated for the modified WSA blended concrete. The reaction mechanism and microstructure of the newly formed binder were evaluated by X-ray diffraction (XRD), calorimetry, thermogravimetric analysis (TGA), scanning electron microscopy (SEM) and energy dispersive X-ray spectroscopy (EDX). When WSA was used as a precursor, the results showed an incompatible reaction between WSA and the alkaline solution. The mixtures were not workable and provided very low compressive strength no matter what kind of chemical activator was used. This was due to the metallic aluminum in WSA, which releases abundant hydrogen gas when WSA reacts with a strong alkaline solution. The results of this thesis also showed that WSA can activate the glassy phase contained in slag, glass powder (GP) and class F fly ash (FFA), with an optimum blend ratio of 50:50. The WSA/slag (mass ratio of 50:50) mortar (w/b of 0.47) attained 46 MPa at 28 days without heat curing assistance.
Significantly fast setting was noticed for the WSA-activated binder, due to the C3A phase, free lime and metallic aluminum contained in the WSA. Adding 5% of gypsum can delay the fast setting, but this greatly increases the potential risk of internal sulfate attack. The XRD, TGA and calorimetry analyses demonstrated the formation of ettringite, C-S-H, portlandite, hydrogarnet and calcium carboaluminate in the hydrated binder. The mechanical performance of the different binders was closely related to their microstructure, as confirmed by SEM observation. The hydrated WSA/slag and WSA/FFA binders formed a C-A-S-H type of gel with a lower Ca/Si ratio (0.47–1.6). A hybrid gel (i.e. C-N-A-S-H) was observed for the WSA/GP binder, with a very low Ca/Si ratio (0.26) and Na/Si ratio (0.03). The SEM/EDX analyses showed the formation of expansive gel (ettringite and thaumasite) in the gypsum-added WSA/slag concrete. The gradual emission of hydrogen gas due to the reaction of WSA with the alkaline environment significantly increased the porosity and degraded the microstructure of the hydrated matrix after setting. In the last phase of this research, a WSA-PC blended binder was tailored to form a high-autogenous-shrinkage concrete in order to compensate for the initial expansion. Different binders were proportioned with PC, WSA, silica fume or slag. The microstructure and mechanical properties of the concrete can be improved by decreasing the w/b ratio and by incorporating silica fume or slag. The 28-day compressive strength of WSA-blended concrete was above 22 MPa and reached 45 MPa when silica fume was added. The PC concrete incorporating silica fume or slag tended to develop higher autogenous shrinkage at low w/b ratios, and thus the ternary binder with the addition of WSA inhibited the long-term shrinkage owing to the initial expansion property of WSA.
In the restrained shrinkage test, the concrete ring incorporating the ternary binder (PC/WSA/slag) revealed negligible cracking potential up to 96 days, as a result of the offsetting effect of WSA expansion. WSA-blended regular concrete could thus be produced for potential applications, with reduced expansion, good mechanical properties and lower permeability.
Abstract:
One of the main features of Greek coinage is the large differences between the emissions of the poleis, which matched neither in their iconographic types nor in the metrical pattern of their denominations. These differences were accommodated by exchange systems governed by the main sanctuaries, which thereby gave official status to the exchange.
Abstract:
Images acquired from unmanned aerial vehicles (UAVs) can provide data with unprecedented spatial and temporal resolution for three-dimensional (3D) modeling. Solutions developed for this purpose mainly operate on photogrammetry concepts, namely UAV-Photogrammetry Systems (UAV-PSs). Such systems are used in applications where both geospatial and visual information about the environment is required. These applications include, but are not limited to, natural resource management such as precision agriculture, military and police services such as traffic-law enforcement, precision engineering such as infrastructure inspection, and health services such as epidemic emergency management. UAV-photogrammetry systems can be differentiated by their spatial characteristics in terms of accuracy and resolution. That is, some applications, such as precision engineering, require high-resolution, high-accuracy information about the environment (e.g. 3D modeling with less than one centimeter accuracy and resolution), while in other applications lower levels of accuracy may be sufficient (e.g. wildlife management needing a few decimeters of resolution). Even in the latter applications, however, the specific characteristics of UAV-PSs should be carefully considered during both system development and application in order to yield satisfying results. In this regard, this thesis presents a comprehensive review of the applications of unmanned aerial imagery, the objective being to determine the challenges that remote-sensing applications of UAV systems currently face. This review also allowed the specific characteristics and requirements of UAV-PSs, which are mostly ignored or not thoroughly assessed in recent studies, to be recognized. Accordingly, the focus of the first part of this thesis is on exploring the methodological and experimental aspects of implementing a UAV-PS.
The developed system was extensively evaluated for precise modeling of an open-pit gravel mine and performing volumetric-change measurements. This application was selected for two main reasons. Firstly, this case study provided a challenging environment for 3D modeling, in terms of scale changes, terrain relief variations as well as structure and texture diversities. Secondly, open-pit-mine monitoring demands high levels of accuracy, which justifies our efforts to improve the developed UAV-PS to its maximum capacities. The hardware of the system consisted of an electric-powered helicopter, a high-resolution digital camera, and an inertial navigation system. The software of the system included the in-house programs specifically designed for camera calibration, platform calibration, system integration, onboard data acquisition, flight planning and ground control point (GCP) detection. The detailed features of the system are discussed in the thesis, and solutions are proposed in order to enhance the system and its photogrammetric outputs. The accuracy of the results was evaluated under various mapping conditions, including direct georeferencing and indirect georeferencing with different numbers, distributions and types of ground control points. Additionally, the effects of imaging configuration and network stability on modeling accuracy were assessed. The second part of this thesis concentrates on improving the techniques of sparse and dense reconstruction. The proposed solutions are alternatives to traditional aerial photogrammetry techniques, properly adapted to specific characteristics of unmanned, low-altitude imagery. Firstly, a method was developed for robust sparse matching and epipolar-geometry estimation. The main achievement of this method was its capacity to handle a very high percentage of outliers (errors among corresponding points) with remarkable computational efficiency (compared to the state-of-the-art techniques). 
Secondly, a block bundle adjustment (BBA) strategy was proposed, based on the integration of intrinsic camera calibration parameters as pseudo-observations into the Gauss-Helmert model. The principal advantage of this strategy was controlling the adverse effects of unstable imaging networks and noisy image observations on the accuracy of self-calibration. A sparse implementation of this strategy was also developed, which allowed its application to data sets containing large numbers of tie points. Finally, the concept of intrinsic curves was revisited for dense stereo matching. The proposed technique achieves a high level of accuracy and efficiency by searching only a small fraction of the whole disparity search space and by internally handling occlusions and matching ambiguities. These photogrammetric solutions were extensively tested using synthetic data, close-range images and the images acquired from the gravel-pit mine. Achieving an absolute 3D mapping accuracy of 11±7 mm illustrates the success of this system for high-precision modeling of the environment.
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-07
Abstract:
Unstructured mesh codes for modelling continuum physics phenomena have evolved to provide the facility to model complex interacting systems. Parallelisation of such codes using Single Program Multiple Data (SPMD) domain decomposition techniques implemented with message passing has been demonstrated to provide high parallel efficiency, scalability to large numbers of processors P, and portability across a wide range of parallel platforms. High efficiency, especially for large P, requires that load balance is achieved in each parallel loop. For a code in which loops span a variety of mesh entity types, for example elements, faces and vertices, some compromise is required between the load balance for each entity type and the quantity of inter-processor communication required to satisfy data dependences between processors.
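As a small illustration of the compromise described above (the entity counts are invented for illustration, not taken from the paper), the per-entity-type load imbalance of one fixed partition can be quantified as the maximum processor load divided by the mean load:

```python
# Hypothetical sketch: one mesh partition over P = 4 processors, scored
# separately for each entity type that parallel loops iterate over.

def imbalance(counts):
    """Load imbalance factor: max load over mean load (1.0 = perfect balance)."""
    mean = sum(counts) / len(counts)
    return max(counts) / mean

# A partition chosen to balance element loops rarely balances the other
# entity types equally well (illustrative numbers).
elements = [250, 248, 252, 250]   # well balanced for element loops
faces    = [510, 470, 520, 500]   # same partition, less balanced for face loops
vertices = [130, 150, 120, 160]   # and worse still for vertex loops

for name, counts in [("elements", elements), ("faces", faces), ("vertices", vertices)]:
    print(f"{name:9s} imbalance = {imbalance(counts):.3f}")
```

A partitioner that re-balanced the vertex loops would shift the cut elsewhere in the mesh, typically increasing inter-processor communication; hence the compromise the abstract refers to.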
Abstract:
In today’s big data world, data is being produced in massive volumes, at great velocity, and from a variety of sources such as mobile devices, sensors, a plethora of small devices hooked to the internet (the Internet of Things), social networks, communication networks and many others. Interactive querying and large-scale analytics are increasingly being used to derive value from this big data. A large portion of this data is stored and processed in the Cloud due to the several advantages the Cloud provides, such as scalability, elasticity, availability, low cost of ownership and overall economies of scale. There is thus a growing need for large-scale cloud-based data management systems that can support real-time ingest, storage and processing of large volumes of heterogeneous data. However, in the pay-as-you-go Cloud environment, the cost of analytics can grow linearly with the time and resources required. Reducing the cost of data analytics in the Cloud thus remains a primary challenge. In my dissertation research, I have focused on building efficient and cost-effective cloud-based data management systems for different application domains that are predominant in cloud computing environments. In the first part of my dissertation, I address the problem of reducing the cost of transactional workloads on relational databases to support database-as-a-service in the Cloud. The primary challenges in supporting such workloads include choosing how to partition the data across a large number of machines, minimizing the number of distributed transactions, providing high data availability, and tolerating failures gracefully.
I have designed, built and evaluated SWORD, an end-to-end scalable online transaction processing system that utilizes workload-aware data placement and replication to minimize the number of distributed transactions, and that incorporates a suite of novel techniques to significantly reduce the overheads incurred both during the initial placement of data and during query execution at runtime. In the second part of my dissertation, I focus on sampling-based progressive analytics as a means to reduce the cost of data analytics in the relational domain. Sampling has traditionally been used by data scientists to get progressive answers to complex analytical tasks over large volumes of data. Typically, this involves manually extracting samples of increasing size (progressive samples) for exploratory querying. This provides the data scientists with user control, repeatable semantics and result provenance. However, such solutions result in tedious workflows that preclude the reuse of work across samples. On the other hand, existing approximate query processing systems report early results but do not offer the above benefits for complex ad-hoc queries. I propose a new progressive data-parallel computation framework, NOW!, that provides support for progressive analytics over big data. In particular, NOW! enables progressive relational (SQL) query support in the Cloud using unique progress semantics that allow efficient and deterministic query processing over samples, providing meaningful early results and provenance to data scientists. NOW! enables the provision of early results using significantly fewer resources, thereby enabling a substantial reduction in the cost incurred during such analytics. Finally, I propose NSCALE, a system for efficient and cost-effective complex analytics on large-scale graph-structured data in the Cloud.
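The idea of progressive samples with repeatable semantics can be sketched in a few lines. This is an illustrative toy, not the NOW! implementation: it estimates AVG(x) over progressively larger samples drawn in a fixed random order, so each step extends (rather than discards) the previous sample and reruns are deterministic.

```python
import random
import statistics

def progressive_avg(data, sizes, seed=7):
    """Return (sample_size, estimate, 95% half-width) for increasing sizes."""
    rng = random.Random(seed)
    order = list(range(len(data)))
    rng.shuffle(order)                # fixed random order -> repeatable semantics
    sample, out = [], []
    for target in sizes:
        while len(sample) < target:   # extend the previous sample (progressive)
            sample.append(data[order[len(sample)]])
        est = statistics.fmean(sample)
        half = 1.96 * statistics.stdev(sample) / len(sample) ** 0.5
        out.append((target, est, half))
    return out

random.seed(1)
data = [random.gauss(100.0, 15.0) for _ in range(50_000)]
for n, est, half in progressive_avg(data, (500, 5_000, 50_000)):
    print(f"n={n:>6}: AVG ≈ {est:.2f} ± {half:.2f}")
```

Early steps deliver a cheap approximate answer with a shrinking error bound; the final step, having consumed the whole data set, reproduces the exact aggregate.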
The system is based on the key observation that a wide range of complex analysis tasks over graph data require processing and reasoning about a large number of multi-hop neighborhoods or subgraphs in the graph; examples include ego network analysis, motif counting in biological networks, finding social circles in social networks, personalized recommendations, link prediction, etc. These tasks are not well served by existing vertex-centric graph processing frameworks, whose computation and execution models limit the user program to directly accessing the state of a single vertex, resulting in high execution overheads. Further, the lack of support for extracting the relevant portions of the graph that are of interest to an analysis task and loading them onto distributed memory leads to poor scalability. NSCALE allows users to write programs at the level of neighborhoods or subgraphs rather than at the level of vertices, and to declaratively specify the subgraphs of interest. It enables the efficient distributed execution of these neighborhood-centric complex analysis tasks over large-scale graphs while minimizing resource consumption and communication cost, thereby substantially reducing the overall cost of graph data analytics in the Cloud. The results of our extensive experimental evaluation of these prototypes with several real-world data sets and applications validate the effectiveness of our techniques, which provide orders-of-magnitude reductions in the overheads of distributed data querying and analysis in the Cloud.
Abstract:
Incidental findings on low-dose CT images obtained during hybrid imaging are an increasing phenomenon as CT technology advances. Understanding the diagnostic value of incidental findings along with the technical limitations is important when reporting image results and recommending follow-up, which may result in an additional radiation dose from further diagnostic imaging and an increase in patient anxiety. This study assessed lesions incidentally detected on CT images acquired for attenuation correction on two SPECT/CT systems. Methods: An anthropomorphic chest phantom containing simulated lesions of varying size and density was imaged on an Infinia Hawkeye 4 and a Symbia T6 using the low-dose CT settings applied for attenuation correction acquisitions in myocardial perfusion imaging. Twenty-two interpreters assessed 46 images from each SPECT/CT system (15 normal images and 31 abnormal images; 41 lesions). Data were evaluated using a jackknife alternative free-response receiver-operating-characteristic analysis (JAFROC). Results: JAFROC analysis showed a significant difference (P < 0.0001) in lesion detection, with the figures of merit being 0.599 (95% confidence interval, 0.568, 0.631) and 0.810 (95% confidence interval, 0.781, 0.839) for the Infinia Hawkeye 4 and Symbia T6, respectively. Lesion detection on the Infinia Hawkeye 4 was generally limited to larger, higher-density lesions. The Symbia T6 allowed improved detection rates for midsized lesions and some lower-density lesions. However, interpreters struggled to detect small (5 mm) lesions on both image sets, irrespective of density. Conclusion: Lesion detection is more reliable on low-dose CT images from the Symbia T6 than from the Infinia Hawkeye 4. This phantom-based study gives an indication of potential lesion detection in the clinical context as shown by two commonly used SPECT/CT systems, which may assist the clinician in determining whether further diagnostic imaging is justified.
Abstract:
Two ideas taken from Bayesian optimization and classifier systems are presented for personnel scheduling, based on choosing a suitable scheduling rule from a set for each person's assignment. Unlike our previous work using genetic algorithms, in which learning is implicit, the learning in both approaches is explicit, i.e. we are able to identify building blocks directly. To achieve this, the Bayesian optimization algorithm builds a Bayesian network of the joint probability distribution of the rules used to construct solutions, while the adapted classifier system assigns each rule a strength value that is constantly updated according to its usefulness in the current situation. Computational results on 52 real data instances of nurse scheduling demonstrate the success of both approaches. It is also suggested that the learning mechanisms in the proposed approaches may be suitable for other scheduling problems.
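The classifier-system side of the idea can be sketched as follows. This is a hedged illustration, not the paper's actual algorithm: the rule names and the reward scheme are invented, and the strength update shown is a generic Widrow-Hoff style reinforcement toward the observed reward.

```python
import random

class RulePool:
    """Scheduling rules with strengths updated by observed usefulness (illustrative)."""

    def __init__(self, rules, beta=0.2, seed=42):
        self.strength = {r: 1.0 for r in rules}   # all rules start equally strong
        self.beta = beta                          # learning rate
        self.rng = random.Random(seed)

    def select(self):
        """Roulette-wheel selection proportional to current strength."""
        rules = list(self.strength)
        weights = [self.strength[r] for r in rules]
        return self.rng.choices(rules, weights=weights, k=1)[0]

    def reinforce(self, rule, reward):
        """Widrow-Hoff style update: move strength toward the observed reward."""
        s = self.strength[rule]
        self.strength[rule] = s + self.beta * (reward - s)

# Hypothetical rule names; rewards stand in for the quality of the
# assignments each rule produced.
pool = RulePool(["highest_cost_first", "overall_contribution", "random_assign"])
for rule, reward in [("highest_cost_first", 2.0), ("random_assign", 0.1)] * 10:
    pool.reinforce(rule, reward)
print(max(pool.strength, key=pool.strength.get))  # → highest_cost_first
```

Over repeated assignments, rules that keep producing good placements accumulate strength and are selected more often, which is the explicit-learning property the abstract contrasts with the implicit learning of a genetic algorithm.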