888 results for supervisory control and data acquisition
Abstract:
Purpose – The purpose of this paper is to consider hierarchical control as a mode of governance and to analyse the extent of control exhibited by central government over local government through the best value (BV) and comprehensive performance assessment (CPA) performance regimes. Design/methodology/approach – This paper utilises Ouchi's framework and, specifically, his articulation of bureaucratic or hierarchical control in the move towards achievement of organisational objectives. Hierarchical control may be inferred from the extent of “command and control” by central government, the use of rewards and sanctions, alignment to government priorities, and discrimination of performance. Findings – CPA represents a more sophisticated performance regime than BV in the governance of local authorities by central government. In comparison to BV, CPA involved less scope for dialogue with local government prior to its introduction, closer inspection of and direction of support toward poorer-performing authorities, and greater alignment to government priorities in the weightings attached to service blocks. Originality/value – The paper focuses upon the hierarchic/bureaucratic mode of governance as articulated by Ouchi and expands on this mode in order to analyse shifts in performance regimes in the public sector.
Abstract:
This book is very practical in its international usefulness, because current risk practice and understanding are not equal across international boundaries. For example, an accountant in Belgium would want to know what the governance regulations are in that country and what risk issues he or she needs to be aware of. This book covers the international aspects of risk management systems, risk and governance, and risk and accounting. In doing so the book covers topics such as: internal control and corporate governance; risk management systems; integrating risk into performance management systems; risk and audit; governance structures; risk management of pensions; pension scheme risks, e.g. hedging derivatives, longevity bonds, etc.; risk reporting; and the role of the accountant in risk management. Case studies throughout the book illustrate, by way of concrete practical examples, the major themes contained in the book. The book includes highly topical areas such as the Sarbanes-Oxley Act and pension risk management.
Abstract:
This thesis describes advances in the characterisation, calibration and data processing of optical coherence tomography (OCT) systems. Femtosecond (fs) laser inscription was used for producing OCT-phantoms. Transparent materials are generally inert to infrared radiation, but with fs lasers material modification occurs via non-linear processes when the highly focused light source interacts with the material. This modification is confined to the focal volume and is highly reproducible. In order to select the best inscription parameters, combinations of different inscription parameters were tested, using three fs laser systems with different operating properties, on a variety of materials. This facilitated the understanding of the key characteristics of the produced structures with the aim of producing viable OCT-phantoms. Finally, OCT-phantoms were successfully designed and fabricated in fused silica. The use of these phantoms to characterise many properties (resolution, distortion, sensitivity decay, scan linearity) of an OCT system was demonstrated. Quantitative methods were developed to support the characterisation of an OCT system collecting images from phantoms and also to improve the quality of the OCT images. Characterisation methods include the measurement of the spatially variant resolution (point spread function (PSF) and modulation transfer function (MTF)), sensitivity and distortion. Processing of OCT data is a computationally intensive process. Standard central processing unit (CPU) based processing might take several minutes to a few hours to process acquired data, so data processing is a significant bottleneck. An alternative choice is to use expensive hardware-based processing such as field programmable gate arrays (FPGAs). Recently, however, graphics processing unit (GPU) based data processing methods have been developed to minimise this data processing and rendering time. These processing techniques include standard-processing methods, which comprise a set of algorithms to process the raw (interference) data obtained by the detector and generate A-scans. The work presented here describes accelerated data processing and post-processing techniques for OCT systems. The GPU-based processing developed during the PhD was later implemented into a custom-built Fourier domain optical coherence tomography (FD-OCT) system. This system currently processes and renders data in real time; its processing throughput is currently limited by the camera capture rate. OCT-phantoms have been heavily used for the qualitative characterisation and adjustment/fine-tuning of the operating conditions of OCT systems. Currently, investigations are under way to characterise OCT systems using our phantoms. The work presented in this thesis demonstrates several novel techniques for fabricating OCT-phantoms and for accelerating OCT data processing using GPUs. In the process of developing phantoms and quantitative methods, a thorough understanding and practical knowledge of OCT and fs laser processing systems was developed. This understanding led to several novel pieces of research that are not only relevant to OCT but have broader importance. For example, extensive understanding of the properties of fs-inscribed structures will be useful in other photonic applications such as the making of phase masks, waveguides and microfluidic channels. Acceleration of data processing with GPUs is also useful in other fields.
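As a minimal sketch of the standard FD-OCT processing chain mentioned above (background subtraction, spectral windowing, and an inverse FFT of the interference spectrum to obtain A-scans), the snippet below runs on the CPU with NumPy; the same per-frame pipeline is what is typically ported to the GPU, for example via CUDA kernels or a GPU array library. Array shapes and names are illustrative assumptions, not values taken from the thesis.

```python
import numpy as np

def process_ascans(raw_spectra: np.ndarray) -> np.ndarray:
    """Convert raw spectrometer frames (n_ascans x n_pixels) into A-scans.

    Steps: remove the DC/background term, apply a Hann window to suppress
    side lobes, then take an inverse FFT of the interference spectrum to get
    the depth profile (A-scan). Wavenumber linearisation is assumed done.
    """
    background = raw_spectra.mean(axis=0)           # average spectrum ~ reference arm
    fringes = raw_spectra - background               # keep only the interference term
    window = np.hanning(raw_spectra.shape[1])
    depth_profiles = np.fft.ifft(fringes * window, axis=1)
    return 20 * np.log10(np.abs(depth_profiles) + 1e-12)   # log-scaled magnitude

# Illustrative use with synthetic data (2048-pixel camera, 500 A-scans per frame)
frame = np.random.rand(500, 2048)
bscan = process_ascans(frame)   # each row is one A-scan of the resulting B-scan
```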
Abstract:
BOOK REVIEWS. Multibody System Mechanics: Modelling, Stability, Control, and Robustness, by V. A. Konoplev and A. Cheremensky, Mathematics and its Applications Vol. 1, Union of Bulgarian Mathematicians, Sofia, 2001, XXII + 288 pp., $65.00, ISBN 954-8880-09-01.
Abstract:
* The work is partially supported by Grant no. NIP917 of the Ministry of Science and Education – Republic of Bulgaria.
Abstract:
Problems of intellectualisation of the man-machine interface and methods of self-organisation for network control in multi-agent info-telecommunication systems are discussed. An architecture and principles for the construction of network and neural agents for new-generation telecommunication systems are suggested. Methods for adaptive and multi-agent routing of information flows, driven by the requests of external agents (users of global telecommunication systems and computer networks), are described.
Abstract:
Renewable energy forms have been widely used in the past decades, marking a "green" shift in energy production. A major driver behind this turn to renewable energy production is the EU directives that set the Union's targets for energy production from renewable sources, greenhouse gas emissions, and increases in energy efficiency. All member countries are obliged to apply harmonised legislation and practices and to restructure their energy production networks in order to meet EU targets. Towards the fulfilment of the 20-20-20 EU targets, Greece promotes a specific strategy based on the construction of large-scale Renewable Energy Source plants. In this paper, we present an optimal design of the Greek renewable energy production network applying a 0-1 Weighted Goal Programming model that considers social, environmental and economic criteria. In the absence of a panel of experts, a Data Envelopment Analysis (DEA) approach is used to filter the best out of the possible network structures, seeking maximum technical efficiency. A Super-Efficiency DEA model is also used to narrow down the solutions and identify the single best one. The results showed that, in order to achieve maximum efficiency, the social and environmental criteria must be weighted more heavily than the economic ones.
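As an illustration of the 0-1 Weighted Goal Programming idea (not the model from the paper), the toy sketch below enumerates binary plant-siting decisions and minimises the weighted deviations from social, environmental and economic goals. All plant names, figures, weights and goal levels are invented for illustration.

```python
from itertools import product

# Toy 0-1 weighted goal programming: choose which of four candidate RES plants
# to build so that weighted deviations from three goals (social acceptance,
# CO2 savings, cost budget) are minimised. All numbers are assumptions.
plants  = ["wind_A", "wind_B", "solar_C", "hydro_D"]
social  = [3, 2, 4, 1]        # social-acceptance score per plant
co2     = [50, 40, 30, 60]    # kt CO2 avoided per year
cost    = [120, 90, 70, 150]  # construction cost, M EUR

goals   = {"social": 7, "co2": 120, "cost": 250}     # target levels
weights = {"social": 0.4, "co2": 0.4, "cost": 0.2}   # social/environmental > economic

def weighted_deviation(x):
    s = sum(si * xi for si, xi in zip(social, x))
    c = sum(ci * xi for ci, xi in zip(co2, x))
    k = sum(ki * xi for ki, xi in zip(cost, x))
    # penalise under-achievement of benefit goals and over-run of the cost goal
    return (weights["social"] * max(goals["social"] - s, 0)
            + weights["co2"] * max(goals["co2"] - c, 0) / 10.0
            + weights["cost"] * max(k - goals["cost"], 0) / 10.0)

best = min(product([0, 1], repeat=len(plants)), key=weighted_deviation)
print("selected plants:", [p for p, xi in zip(plants, best) if xi])
```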
Abstract:
Accurate measurement of the intervertebral kinematics of the cervical spine can support the diagnosis of widespread diseases related to neck pain, such as chronic whiplash dysfunction, arthritis, and segmental degeneration. The natural inaccessibility of the spine, its complex anatomy, and the small range of motion only permit concise measurement in vivo. Low-dose X-ray fluoroscopy allows time-continuous screening of the cervical spine during the patient's spontaneous motion. To obtain accurate motion measurements, each vertebra was tracked by means of image processing along a sequence of radiographic images. To obtain a time-continuous representation of motion and to reduce noise in the experimental data, smoothing spline interpolation was used. Estimates of intervertebral motion for cervical segments were obtained by processing the patient's fluoroscopic sequence; the intervertebral angle and displacement and the instantaneous centre of rotation were computed. The RMS value of the fitting errors was about 0.2 degrees for rotation and 0.2 mm for displacement. © 2013 Paolo Bifulco et al.
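The smoothing-spline step can be sketched as follows with SciPy's UnivariateSpline: noisy per-frame intervertebral angles are fitted with a cubic smoothing spline and the RMS fitting error is computed. The synthetic angle data and smoothing factor below are assumptions for illustration, not the study's measurements.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

# Synthetic example: one intervertebral angle tracked over 100 fluoroscopic frames
t = np.linspace(0.0, 4.0, 100)                             # time, s
true_angle = 8.0 * np.sin(0.5 * np.pi * t)                 # smooth underlying motion, deg
measured = true_angle + np.random.normal(0, 0.3, t.size)   # tracking noise

# Smoothing spline: the factor s trades fidelity against noise rejection
spline = UnivariateSpline(t, measured, k=3, s=t.size * 0.3**2)
smoothed = spline(t)

rms_fit_error = np.sqrt(np.mean((smoothed - measured) ** 2))
print(f"RMS fitting error: {rms_fit_error:.2f} deg")

# The spline also gives a time-continuous derivative (angular velocity),
# which is useful when locating the instantaneous centre of rotation.
angular_velocity = spline.derivative()(t)
```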
Abstract:
We present novel Terahertz (THz) emitting, optically pumped Quantum Dot (QD) photoconductive (PC) materials and antenna structures on their basis for both pulsed and CW pumping regimes. Quantum dot and microantenna design - Presented here are design considerations for the semiconductor materials in our novel QD-based photoconductive antenna (PCA) structures, metallic microantenna designs, and their implementation as part of a complete THz source or transceiver system. Layers of implanted QDs can be used for the photocarrier lifetime shortening mechanism [1, 2]. In our research we use InAs:GaAs QD structures of varying dot layer number and distributed Bragg reflector (DBR) reflectivity range. According to the observed dependence of carrier lifetimes on QD layer periodicity [3], it is reasonable to assume that electron lifetimes can potentially be reduced down to 0.45 ps in such structures. Both of these features, long excitation wavelength and short carrier lifetime, suggest the feasibility of QD antennas for THz generation and detection. In general, relatively simple antenna configurations were used here, including coplanar stripline (CPS), Hertzian-type dipoles and bow-ties for broadband operation, and log-spiral (LS) or log-periodic (LP) 'toothed' geometries for a CW operation regime. Experimental results - Several lasers are used for antenna pumping: a Ti:Sapphire femtosecond laser, as well as single- [4], double- [5] wavelength, and pulsed [6] QD lasers. For detection of the THz signal, different schemes and devices were used, e.g. a helium-cooled bolometer, a Golay cell and a second PCA for coherent THz detection in a traditional time-domain measurement scheme. Fig. 1 shows the typical THz output power trend from a 5 um-gap LP QD PCA pumped using a tunable QD LD, with the optical pump spectrum shown in (b). Summary - QD-based THz systems have been demonstrated as a feasible and highly versatile solution. The implementation of QD LDs as pump sources could be a major step towards an ultra-compact, electrically controllable transceiver system that would increase the scope of data analysis due to the high pulse repetition rates of such LDs [3], allowing real-time THz TDS and data acquisition. Future steps in the development of such systems now lie in the further investigation of QD-based THz PCA structures and devices, particularly with regard to their compatibility with QD LDs as pump sources. [1] E. U. Rafailov et al., "Fast quantum-dot saturable absorber for passive mode-locking of solid-state lasers," IEEE Photon. Technol. Lett., vol. 16, pp. 2439-2441 (2004). [2] E. Estacio, "Strong enhancement of terahertz emission from GaAs in InAs/GaAs quantum dot structures," Appl. Phys. Lett., vol. 94, p. 232104 (2009). [3] C. Kadow et al., "Self-assembled ErAs islands in GaAs: Growth and subpicosecond carrier dynamics," Appl. Phys. Lett., vol. 75, pp. 3548-3550 (1999). [4] T. Kruczek, R. Leyman, D. Carnegie, N. Bazieva, G. Erbert, S. Schulz, C. Reardon, and E. U. Rafailov, "Continuous wave terahertz radiation from an InAs/GaAs quantum-dot photomixer device," Appl. Phys. Lett., vol. 101 (2012). [5] R. Leyman, D. I. Nikitichev, N. Bazieva, and E. U. Rafailov, "Multimodal spectral control of a quantum-dot diode laser for THz difference frequency generation," Appl. Phys. Lett., vol. 99 (2011). [6] K. G. Wilcox, M. Butkus, I. Farrer, D. A. Ritchie, A. Tropper, and E. U. Rafailov, "Subpicosecond quantum dot saturable absorber mode-locked semiconductor disk laser," Appl. Phys. Lett., vol. 94, 2511. © 2014 IEEE.
Abstract:
The paper presents in brief the "2nd Generation Open Access Infrastructure for Research in Europe" project (http://www.openaire.eu/) and what has been done in Bulgaria during the last year in the area of open access to scientific information and data.
Abstract:
This paper presents the results of our data mining study of Pb-Zn (lead-zinc) ore assay records from a mining enterprise in Bulgaria. We examined the dataset, cleaned outliers, visualized the data, and created dataset statistics. A Pb-Zn cluster data mining model was created for segmentation and prediction of Pb-Zn ore assay data. The Pb-Zn cluster data model consists of five clusters and DMX queries. We analyzed the Pb-Zn cluster content, size, structure, and characteristics. The set of DMX queries allows for browsing and managing the clusters, as well as predicting ore assay records. Testing and validation of the Pb-Zn cluster data mining model were carried out in order to show its reasonable accuracy before being used in a production environment. The Pb-Zn cluster data mining model can be used to change the mine grinding and flotation processing parameters in almost real time, which is important for the efficiency of the Pb-Zn ore beneficiation process. ACM Computing Classification System (1998): H.2.8, H.3.3.
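The paper's model is built in SQL Server and queried with DMX; purely as an illustration of the same five-cluster segmentation and prediction idea, the sketch below uses scikit-learn's KMeans on synthetic Pb-Zn assay columns. The column choices and values are assumptions, not the mine's data.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Synthetic ore assay records (Pb and Zn grades in %); illustrative only.
rng = np.random.default_rng(0)
assays = np.column_stack([
    rng.normal(2.5, 0.8, 1000),   # Pb grade, %
    rng.normal(3.0, 1.0, 1000),   # Zn grade, %
])

scaler = StandardScaler().fit(assays)
model = KMeans(n_clusters=5, n_init=10, random_state=0).fit(scaler.transform(assays))

print("cluster sizes:", np.bincount(model.labels_))
print("centroids (%):", scaler.inverse_transform(model.cluster_centers_))

# Assigning a new assay record to a cluster, analogous to a DMX prediction query
new_record = [[2.1, 4.2]]   # Pb %, Zn %
print("predicted cluster:", model.predict(scaler.transform(new_record))[0])
```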
Abstract:
This work was supported in part by the EU project "2nd Generation Open Access Infrastructure for Research in Europe" (OpenAIRE+). The autumn training school "Development and Promotion of Open Access to Scientific Information and Research" is organised within the framework of the Fourth International Conference on Digital Presentation and Preservation of Cultural and Scientific Heritage – DiPP2014 (September 18–21, 2014, Veliko Tarnovo, Bulgaria, http://dipp2014.math.bas.bg/), organised under the UNESCO patronage. The main organiser is the Institute of Mathematics and Informatics, Bulgarian Academy of Sciences, with the support of the EU project FOSTER (http://www.fosteropenscience.eu/) and the P. R. Slaveykov Regional Public Library in Veliko Tarnovo, Bulgaria.
Abstract:
Big data comes in various ways, types, shapes, forms and sizes. Indeed, almost all areas of science, technology, medicine, public health, economics, business, linguistics and social science are bombarded by ever-increasing flows of data begging to be analyzed efficiently and effectively. In this paper, we propose a rough idea of a possible taxonomy of big data, along with some of the most commonly used tools for handling each particular category of bigness. The dimensionality p of the input space and the sample size n are usually the main ingredients in the characterization of data bigness. The specific statistical machine learning technique used to handle a particular big data set will depend on which category it falls into within the bigness taxonomy. Large p, small n data sets, for instance, require a different set of tools from the large n, small p variety. Among other tools, we discuss Preprocessing, Standardization, Imputation, Projection, Regularization, Penalization, Compression, Reduction, Selection, Kernelization, Hybridization, Parallelization, Aggregation, Randomization, Replication, and Sequentialization. Indeed, it is important to emphasize right away that the so-called no free lunch theorem applies here, in the sense that there is no universally superior method that outperforms all other methods on all categories of bigness. It is also important to stress the fact that simplicity, in the sense of Ockham's razor non-plurality principle of parsimony, tends to reign supreme when it comes to massive data. We conclude with a comparison of the predictive performance of some of the most commonly used methods on a few data sets.
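As a small illustration (not from the paper) of how the (n, p) taxonomy can drive tool choice, the sketch below selects penalization (lasso) for large p, small n data and a stochastic-gradient fit for large n, small p data; the threshold and the particular models are assumptions.

```python
import numpy as np
from sklearn.linear_model import Lasso, SGDRegressor

def fit_by_bigness(X: np.ndarray, y: np.ndarray):
    """Pick a fitting strategy from the (n, p) shape of the data.

    Large p, small n  -> penalization/selection (lasso) to cope with p >> n.
    Large n, small p  -> stochastic gradient fitting, which scales with n.
    The threshold is illustrative only.
    """
    n, p = X.shape
    if p > n:
        return Lasso(alpha=0.1).fit(X, y)              # sparse, regularized estimate
    return SGDRegressor(max_iter=1000, tol=1e-3).fit(X, y)

rng = np.random.default_rng(1)

# Large p, small n example: 50 samples, 500 features
X_wide = rng.normal(size=(50, 500))
y_wide = X_wide[:, :3] @ np.array([2.0, -1.0, 0.5]) + rng.normal(0, 0.1, 50)
print(type(fit_by_bigness(X_wide, y_wide)).__name__)    # -> Lasso

# Large n, small p example: 100000 samples, 10 features
X_tall = rng.normal(size=(100_000, 10))
y_tall = X_tall @ rng.normal(size=10) + rng.normal(0, 0.1, 100_000)
print(type(fit_by_bigness(X_tall, y_tall)).__name__)    # -> SGDRegressor
```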
Abstract:
For wireless power transfer (WPT) systems, communication between the primary side and the pickup side is a challenge because of the large air gap and magnetic interferences. A novel method, which integrates bidirectional data communication into a high-power WPT system, is proposed in this paper. The power and data transfer share the same inductive link between coreless coils. Power/data frequency division multiplexing technique is applied, and the power and data are transmitted by employing different frequency carriers and controlled independently. The circuit model of the multiband system is provided to analyze the transmission gain of the communication channel, as well as the power delivery performance. The crosstalk interference between two carriers is discussed. In addition, the signal-to-noise ratios of the channels are also estimated, which gives a guideline for the design of mod/demod circuits. Finally, a 500-W WPT prototype has been built to demonstrate the effectiveness of the proposed WPT system.
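As a signal-level illustration of the power/data frequency-division multiplexing idea (not the paper's circuit model), the sketch below sums a strong low-frequency power carrier and a weak, higher-frequency on-off-keyed data carrier on one link, then recovers the data carrier at the pickup with a band-pass filter. All frequencies, amplitudes and the bit rate are assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt

# Illustrative power/data frequency-division multiplexing on one inductive link.
fs = 10_000_000                         # simulation sampling rate, Hz
t = np.arange(0, 2e-3, 1 / fs)

f_power, f_data = 85_000, 1_000_000     # power carrier and data carrier, Hz
bits = np.repeat(np.random.randint(0, 2, 200), t.size // 200)   # 100 kb/s OOK data

power_carrier = 10.0 * np.sin(2 * np.pi * f_power * t)               # high-amplitude power tone
data_carrier = 0.2 * bits[:t.size] * np.sin(2 * np.pi * f_data * t)  # low-amplitude data tone
link_signal = power_carrier + data_carrier     # both share the coreless-coil link

# Pickup side: a band-pass around the data carrier rejects the strong power tone
b, a = butter(4, [0.8 * f_data, 1.2 * f_data], btype="bandpass", fs=fs)
recovered = filtfilt(b, a, link_signal)

# Crude envelope comparison recovers the OOK bits; the channel's SNR limits the
# achievable data rate, as discussed in the paper.
envelope = np.abs(recovered)
print("mean envelope, bit=1 vs bit=0:",
      envelope[bits[:t.size] == 1].mean(), envelope[bits[:t.size] == 0].mean())
```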
Abstract:
Research Question/Issue - Which forms of state control over corporations have emerged in countries that made a transition from centrally planned to market-based economies, and what are their implications for corporate governance? We assess the literature on the variation and evolution of state control in transition economies, focusing on the corporate governance of state-controlled firms. We highlight emerging trends and identify future research avenues. Research Findings/Insights - Based on our analysis of more than 100 articles in leading management, finance, and economics journals since 1989, we demonstrate how research on state control evolved from a polarized approach of public–private equity ownership comparison to studying a variety of constellations of state capitalism. Theoretical/Academic Implications - We identify theoretical perspectives that help us better understand the benefits and costs associated with various forms of state control over firms. We encourage future studies to examine how context-specific factors determine the effect of state control on corporate governance. Practitioner/Policy Implications - Investors and policymakers should consider under which conditions investing in state-affiliated firms generates superior returns.