915 results for Dynamic Headspace Analysis
Abstract:
This paper measures the connectedness in EMU sovereign market volatility between April 1999 and January 2014, in order to monitor stress transmission and to identify episodes of intensive spillovers from one country to the others. To this end, we first perform a static and dynamic analysis to measure the total volatility connectedness in the entire period (the system-wide approach) using a framework recently proposed by Diebold and Yılmaz (2014). Second, we make use of a dynamic analysis to evaluate the net directional connectedness for each country and apply panel model techniques to investigate its determinants. Finally, to gain further insights, we examine the time-varying behaviour of net pair-wise directional connectedness at different stages of the recent sovereign debt crisis.
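As an illustration of the connectedness measures mentioned above, the following minimal Python sketch computes the total (system-wide) and net directional connectedness indices from a row-normalized forecast-error variance decomposition matrix, in the spirit of Diebold and Yılmaz (2014); the four-market setting and matrix values are hypothetical placeholders, not results from the paper.

```python
import numpy as np

# Hypothetical H-step generalized forecast-error variance decomposition for
# 4 markets: theta[i, j] = share of market i's variance due to shocks in
# market j (rows normalized to sum to 1, as in Diebold-Yilmaz).
theta = np.array([
    [0.70, 0.10, 0.12, 0.08],
    [0.15, 0.60, 0.15, 0.10],
    [0.10, 0.20, 0.55, 0.15],
    [0.05, 0.10, 0.20, 0.65],
])

n = theta.shape[0]
off_diag = theta - np.diag(np.diag(theta))

# Total (system-wide) connectedness: share of variance due to cross-market shocks.
total_connectedness = 100 * off_diag.sum() / n

# Directional connectedness received ("from others") and transmitted ("to others").
from_others = 100 * off_diag.sum(axis=1)   # row sums of off-diagonal terms
to_others = 100 * off_diag.sum(axis=0)     # column sums of off-diagonal terms
net_directional = to_others - from_others  # positive -> net transmitter of volatility

print(f"Total connectedness: {total_connectedness:.1f}%")
print("Net directional connectedness:", np.round(net_directional, 1))
```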
Abstract:
As a result of the growing interest in studying employee well-being as a complex process that portrays high levels of within-individual variability and evolves over time, the present study considers the experience of flow in the workplace from a nonlinear dynamical systems approach. Our goal is to offer new ways to move the study of employee well-being beyond linear approaches. With nonlinear dynamical systems theory as the backdrop, we conducted a longitudinal study using the experience sampling method and qualitative semi-structured interviews for data collection; 6,981 records of data were collected from a sample of 60 employees. The obtained time series were analyzed using various techniques derived from nonlinear dynamical systems theory (i.e., recurrence analysis and surrogate data) and multiple correspondence analyses. The results revealed the following: 1) flow in the workplace presents a high degree of within-individual variability, and this variability is chaotic in most cases (75%); 2) high levels of flow are associated with chaos; and 3) different dimensions of the flow experience (e.g., merging of action and awareness) as well as individual (e.g., age) and job characteristics (e.g., job tenure) are associated with the emergence of different dynamic patterns (chaotic, linear and random).
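A minimal sketch of the kind of recurrence analysis mentioned above: a univariate time series is time-delay embedded, a binary recurrence matrix is built from a distance threshold, and the recurrence rate is computed. The signal, embedding parameters and threshold are illustrative assumptions, not the study's actual settings.

```python
import numpy as np

def recurrence_matrix(x, dim=3, delay=1, eps=0.1):
    """Binary recurrence matrix of a time-delay-embedded series."""
    # Time-delay embedding: rows are state vectors (x_t, x_{t+delay}, ...).
    n = len(x) - (dim - 1) * delay
    emb = np.column_stack([x[i * delay: i * delay + n] for i in range(dim)])
    # Pairwise Euclidean distances between state vectors.
    dists = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
    return (dists < eps).astype(int)

rng = np.random.default_rng(0)
x = np.sin(np.linspace(0, 20 * np.pi, 500)) + 0.1 * rng.standard_normal(500)

R = recurrence_matrix(x, dim=3, delay=2, eps=0.3)
# Recurrence rate: fraction of recurrent point pairs, excluding the main diagonal.
rr = (R.sum() - np.trace(R)) / (R.size - R.shape[0])
print(f"Recurrence rate: {rr:.3f}")
```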
Abstract:
The objective of this paper is to examine whether informal labor markets affect the flows of Foreign Direct Investment (FDI), and whether this effect is similar in developed and developing countries. With this aim, different public data sources, such as the World Bank (WB) and the United Nations Conference on Trade and Development (UNCTAD), are used, and panel econometric models are estimated for a sample of 65 countries over a 14-year period (1996-2009). In addition, this paper uses a dynamic model as an extension of the analysis to establish whether such an effect exists and what its indicators and significance may be.
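A minimal sketch of a fixed-effects (within) panel estimator of the kind that could be applied to such a country-year panel; the variable names, simulated data and specification are hypothetical and are not taken from the paper.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(7)

# Hypothetical country-year panel: FDI inflows explained by the size of the
# informal labour market plus a control (names and data are illustrative only).
countries, years = 65, 14
idx = pd.MultiIndex.from_product([range(countries), range(years)],
                                 names=["country", "year"])
df = pd.DataFrame({
    "informality": rng.uniform(5, 60, countries * years),
    "gdp_growth": rng.normal(3, 2, countries * years),
}, index=idx)
df["fdi"] = (2.0 - 0.03 * df["informality"] + 0.5 * df["gdp_growth"]
             + rng.normal(0, 1, countries * years))

# Fixed-effects (within) estimator: demean every variable by country, then OLS.
demeaned = df - df.groupby(level="country").transform("mean")
X = np.column_stack([demeaned["informality"], demeaned["gdp_growth"]])
y = demeaned["fdi"].to_numpy()
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(dict(zip(["informality", "gdp_growth"], np.round(beta, 3))))
```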
Abstract:
The aim of this study was to validate a method for the determination of acetaldehyde, methanol, ethanol, acetone and isopropanol employing solid-phase microextraction (SPME) coupled to gas chromatography with flame ionization detection (GC-FID). The operational conditions of SPME were optimized by response surface analysis. The calibration curves for all compounds were linear, with r² > 0.9973. Accuracy (89.1-109.0%), intra-assay precision (1.8-8.5%) and inter-assay precision (2.2-8.2%) were acceptable. The quantification limit was 50 µg/mL. The method was applied to the measurement of ethanol in the blood and oral fluid of a group of volunteers. Oral fluid ethanol concentrations were not directly correlated with blood concentrations.
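A minimal sketch of the calibration step implied above: an ordinary least-squares calibration line relating spiked concentration to peak area, its r², and the interpolation of an unknown sample. The analyte and peak areas are made-up placeholders, not data from the validated method.

```python
import numpy as np

# Hypothetical calibration data for one analyte: spiked concentration (ug/mL)
# versus chromatographic peak area from the GC-FID detector.
conc = np.array([50, 100, 250, 500, 1000, 2000], dtype=float)
area = np.array([1.1e4, 2.3e4, 5.6e4, 1.13e5, 2.24e5, 4.51e5])

# Ordinary least-squares calibration line: area = slope * conc + intercept.
slope, intercept = np.polyfit(conc, area, 1)
pred = slope * conc + intercept

# Coefficient of determination; the paper reports r^2 > 0.9973 for all analytes.
ss_res = np.sum((area - pred) ** 2)
ss_tot = np.sum((area - area.mean()) ** 2)
r2 = 1 - ss_res / ss_tot

# Interpolating an unknown sample's concentration from its peak area.
unknown_area = 7.8e4
unknown_conc = (unknown_area - intercept) / slope

print(f"r^2 = {r2:.4f}, estimated concentration = {unknown_conc:.0f} ug/mL")
```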
Abstract:
This paper reports on the identification of volatile and semi-volatile compounds and a comparison of the chromatographic profiles obtained by Headspace Solid-Phase Microextraction/Gas Chromatography with Mass Spectrometry detection (HS-SPME-GC-MS) of dried leaves of Mikania glomerata Sprengel (Asteraceae), also known as 'guaco.' Three different types of commercial SPME fibers were tested: polydimethylsiloxane (PDMS), polydimethylsiloxane/divinylbenzene (PDMS/DVB) and polyacrylate (PA). Fifty-nine compounds were fully identified by HS-SPME-GC-MS, including coumarin, a marker for the quality control of guaco-based phytomedicines; most of the other identified compounds were mono- and sesquiterpenes. The PA fiber performed better in the analysis of coumarin, while PDMS/DVB proved to be the best choice for a general, non-selective analysis of volatile and semi-volatile guaco compounds. The SPME method is faster and requires a smaller sample than conventional hydrodistillation of essential oils, while providing a general overview of the volatile and semi-volatile compounds of M. glomerata.
Abstract:
This study presents an automatic, computer-aided analytical method called Comparison Structure Analysis (CSA), which can be applied to different dimensions of music. The aim of CSA is first and foremost practical: to produce dynamic and understandable representations of musical properties by evaluating the prevalence of a chosen musical data structure throughout a musical piece. Such a comparison structure may refer to a mathematical vector, a set, a matrix or another type of data structure, or even a combination of data structures. CSA depends on an abstract systematic segmentation that allows for a statistical or mathematical survey of the data. To choose a comparison structure is to tune the apparatus to be sensitive to an exclusive set of musical properties. CSA sits somewhere between traditional music analysis and computer-aided music information retrieval (MIR). Theoretically defined musical entities, such as pitch-class sets, set-classes and particular rhythm patterns, are detected in compositions using pattern extraction and pattern comparison algorithms that are typical within the field of MIR. In principle, the idea of comparison structure analysis can be applied to any time-series type of data and, in the music-analytical context, to polyphonic as well as homophonic music. Tonal trends, set-class similarities, invertible counterpoints, voice-leading similarities, short-term modulations, rhythmic similarities and multiparametric changes in musical texture were studied. Since CSA allows for a highly accurate classification of compositions, its methods may be applicable to symbolic music information retrieval as well. The strength of CSA lies especially in the possibility of comparing observations concerning different musical parameters and of combining CSA with statistical and, perhaps, other music-analytical methods. The results of CSA depend on the adequacy of the similarity measure. New similarity measures for tonal stability, rhythmic similarity and set-class similarity were proposed. The most advanced results were attained by employing automated function generation (comparable to so-called genetic programming) to search for an optimal model for set-class similarity measurement. However, the results of CSA seem to agree strongly regardless of the type of similarity function employed in the analysis.
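As one concrete example of a comparison structure, the sketch below builds interval-class vectors of pitch-class sets and compares them with a simple cosine similarity; this is only an illustrative stand-in, not the similarity measures actually proposed in the thesis.

```python
import numpy as np
from itertools import combinations

def interval_class_vector(pcs):
    """6-element interval-class vector of a pitch-class set (pitch classes 0-11)."""
    icv = np.zeros(6, dtype=int)
    for a, b in combinations(sorted(set(pcs)), 2):
        ic = min((a - b) % 12, (b - a) % 12)   # interval class 1..6
        icv[ic - 1] += 1
    return icv

def icv_similarity(set_a, set_b):
    """Cosine similarity between interval-class vectors (1.0 = identical profile)."""
    va, vb = interval_class_vector(set_a), interval_class_vector(set_b)
    return float(va @ vb / (np.linalg.norm(va) * np.linalg.norm(vb)))

c_major_triad = [0, 4, 7]
c_minor_triad = [0, 3, 7]
whole_tone = [0, 2, 4, 6, 8, 10]

print(icv_similarity(c_major_triad, c_minor_triad))  # 1.0: same set-class profile
print(icv_similarity(c_major_triad, whole_tone))     # much lower similarity
```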
Abstract:
The thermal stability and thermal decomposition of succinic acid, sodium succinate and its compounds with Mn(II), Fe(II), Co(II), Ni(II), Cu(II) and Zn(II) were investigated employing simultaneous thermogravimetry and differential thermal analysis (TG-DTA) in nitrogen and carbon dioxide atmospheres, and TG-FTIR in a nitrogen atmosphere. On heating, in both atmospheres succinic acid melts and evaporates, while sodium succinate decomposes with the formation of sodium carbonate. For the transition metal succinates, the final residue up to 1180 °C in the N2 atmosphere was a mixture of metal and metal oxide in no simple stoichiometric relation, except for the Zn compound, whose residue was a small quantity of carbonaceous material. In the CO2 atmosphere, the final residues up to 980 °C were MnO, Fe3O4, CoO and ZnO, and mixtures of Ni/NiO and Cu/Cu2O.
Abstract:
Direct torque control (DTC) is a new control method for rotating-field electrical machines. DTC directly controls the motor stator flux linkage with the stator voltage, and no stator current controllers are used. Very good torque dynamics can be achieved with the DTC method. Until now, DTC has been applied to asynchronous motor drives. The purpose of this work is to analyse the applicability of DTC to electrically excited synchronous motor drives. Compared with asynchronous motor drives, electrically excited synchronous motor drives require an additional control for the rotor field current. This field current control is called excitation control in this study. The dependence of the static and dynamic performance of DTC synchronous motor drives on the excitation control has been analysed, and a straightforward excitation control method has been developed and tested. In the field-weakening range the stator flux linkage modulus must be reduced in order to keep the electromotive force of the synchronous motor smaller than the stator voltage and to maintain a sufficient voltage reserve. The dynamic performance of the DTC synchronous motor drive depends on the stator flux linkage modulus. Another important factor for the dynamic performance in the field-weakening range is the excitation control. The field-weakening analysis considers both dependencies. A modified excitation control method, which maximises the dynamic performance in the field-weakening range, has been developed. In synchronous motor drives the load angle must be kept within a stable working area in order to avoid loss of synchronism. Traditional vector control methods allow the load angle of the synchronous motor to be adjusted directly by stator current control. In the DTC synchronous motor drive the load angle is not a directly controllable variable; it is formed freely according to the motor's electromagnetic state and load. The load angle can be limited indirectly by limiting the torque reference. This method is, however, parameter sensitive and requires a safety margin between the theoretical torque maximum and the actual torque limit. The DTC modulation principle, however, allows direct load angle adjustment without any current control. In this work a direct load angle control method has been developed. The method keeps the drive stable and allows maximal utilisation of the drive without a safety margin in the torque limitation.
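A minimal sketch of the classical DTC switching principle described above (hysteresis comparators on the flux and torque errors plus a sector-based voltage-vector lookup), given as a generic illustration; the table variant, hysteresis bands and example values are assumptions and do not reproduce the thesis' synchronous-machine implementation.

```python
import numpy as np

# One common form of the classical DTC switching logic: hysteresis comparators
# for flux and torque errors select an inverter voltage vector based on the
# stator flux sector. Active vectors are numbered 1..6, 0 denotes a zero vector.

def sector(flux_angle):
    """Sector 0..5 of the stator flux vector, 60 degrees each."""
    return int(np.floor(((flux_angle + np.pi / 6) % (2 * np.pi)) / (np.pi / 3)))

def select_vector(flux_error, torque_error, flux_angle,
                  flux_band=0.01, torque_band=0.05):
    """Pick an inverter voltage vector from two-level hysteresis comparators."""
    d_flux = 1 if flux_error > flux_band else 0              # 1: increase flux
    d_torque = (1 if torque_error > torque_band
                else -1 if torque_error < -torque_band else 0)
    k = sector(flux_angle)
    if d_torque == 0:
        return 0                                             # zero vector holds torque
    step = {(1, 1): 1, (1, -1): -1, (0, 1): 2, (0, -1): -2}[(d_flux, d_torque)]
    return (k + step) % 6 + 1                                # active vector 1..6

# Example: flux slightly low, torque too low, flux vector at 100 degrees -> V4.
print(select_vector(flux_error=0.02, torque_error=0.3,
                    flux_angle=np.deg2rad(100)))
```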
Abstract:
Innovation has been widely recognized as an important driver of firm competitiveness, and the firm's internal research and development (R&D) activities are often considered to have a critical role in innovation activities. Internal R&D is, however, not the only source of innovation, as firms may also tap into the knowledge necessary for innovation through various types of sourcing agreements or by collaborating with other organizations. The objective of this study is to analyze the way firms go about organizing their innovation boundaries efficiently. Within this context, the analysis focuses, firstly, on the relation between innovation boundaries and firm innovation performance and, secondly, on the factors explaining innovation boundary organization. The innovation literature recognizes that the sources of innovation depend on the nature of technology but does not offer a sufficient tool for analyzing innovation boundary options and their efficiency. Thus, this study suggests incorporating into the analysis insights from transaction cost economics (TCE) complemented with dynamic governance costs and benefits. The thesis consists of two parts. The first part introduces the background of the study, the research objectives, an overview of the empirical studies, and the general conclusions of the study. The second part is formed of five publications. The overall results indicate, firstly, that although the relation between firm innovation boundary options and innovation performance is partly industry-sector-specific, firm-level search strategies and knowledge transfer capabilities are important for innovation performance independently of the sector. Secondly, the results show that the attributes suggested by TCE alone do not offer a sufficient explanation of innovation boundary selection, especially under conditions of high (radical) uncertainty. Based on the results, the dynamic governance cost and benefit framework complements static TCE when firm innovation boundaries are scrutinized.
Abstract:
This thesis investigates the effectiveness of time-varying hedging during the financial crisis of 2007 and the European Debt Crisis of 2010. The seven test economies are part of the European Monetary Union and are in different economic conditions. The time-varying hedge ratio was constructed using conditional variances and correlations obtained from multivariate GARCH models. Three different underlying portfolios are used: national equity markets, government bond markets, and the combination of the two. These underlying portfolios were hedged using credit default swaps. The empirical part includes in-sample and out-of-sample analyses, constructed using both constant and dynamic models. In almost every case the dynamic models outperform the constant ones in determining the hedge ratio. We could not find any statistically significant evidence to support the use of the asymmetric dynamic conditional correlation model. Our findings are in line with the prior literature and support the use of a time-varying hedge ratio. Finally, we found that in some cases credit default swaps are not suitable hedging instruments and act more as speculative instruments.
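A minimal sketch of how a time-varying minimum-variance hedge ratio follows from conditional variances and covariances; an EWMA filter is used here as a simple stand-in for the multivariate GARCH models of the thesis, and the simulated return series are purely illustrative.

```python
import numpy as np

def ewma_cov(spot, cds, lam=0.94):
    """EWMA conditional variances/covariance as a simple stand-in for MGARCH."""
    T = len(spot)
    var_s, var_c, cov_sc = np.empty(T), np.empty(T), np.empty(T)
    var_s[0], var_c[0] = spot.var(), cds.var()
    cov_sc[0] = np.cov(spot, cds)[0, 1]
    for t in range(1, T):
        var_s[t] = lam * var_s[t - 1] + (1 - lam) * spot[t - 1] ** 2
        var_c[t] = lam * var_c[t - 1] + (1 - lam) * cds[t - 1] ** 2
        cov_sc[t] = lam * cov_sc[t - 1] + (1 - lam) * spot[t - 1] * cds[t - 1]
    return var_s, var_c, cov_sc

rng = np.random.default_rng(1)
spot = 0.01 * rng.standard_normal(1000)                    # returns of the underlying portfolio
cds = -0.6 * spot + 0.005 * rng.standard_normal(1000)      # CDS returns, negatively related

var_s, var_c, cov_sc = ewma_cov(spot, cds)
# Minimum-variance hedge ratio at each date: h_t = Cov_t(spot, cds) / Var_t(cds).
hedge_ratio = cov_sc / var_c
hedged = spot - hedge_ratio * cds
print(f"Variance reduction: {1 - hedged.var() / spot.var():.2%}")
```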
Abstract:
Family businesses are among the longest-lived and most prevalent institutions in the world, and they are an important source of economic development and growth. Ownership is key to the business life of the firm and is also one of the main criteria in the definition of a family business. There is only little research on portfolio entrepreneurship or portfolio business within the family business context. The absence of empirical evidence on the long-term relationship between family ownership and portfolio development presents an important gap in the family business literature. This study deals with family business ownership changes and the development of portfolios in family businesses, and it is positioned within the conversation on family business, growth, ownership, management and strategy. This study contributes to and expands the existing body of theory on family business and ownership. From a theoretical point of view, this study combines and integrates insights from the fields of portfolio entrepreneurship, ownership, and family business. This cross-fertilization produces interesting empirical and theoretical findings that can constitute a basis for solid contributions to the understanding of ownership dynamics and portfolio entrepreneurship in family firms. The research strategy chosen for this study represents longitudinal, qualitative, hermeneutic, and deductive approaches. The empirical part of the study uses a case study approach with an embedded design, that is, multiple levels of analysis within a single study. The study consists of two cases and begins with a pilot case, which forms a pre-understanding of the phenomenon and develops the methodological approach built on in the main case; the main case then deepens the understanding of the phenomenon. This study develops and tests a research method for studying family business portfolio development, focusing on how ownership changes influence family business structures over time. The study reveals the linkages between dimensions of ownership and how they give rise to portfolio business development within the context of the family business. The empirical results suggest that family business ownership is dynamic and that owners use ownership as a tool for creating business portfolios.
Abstract:
The rapid ongoing evolution of multiprocessors will lead to systems with hundreds of processing cores integrated on a single chip. An emerging challenge is the implementation of reliable and efficient interconnection between these cores as well as the other components in the system. Network-on-Chip is an interconnection approach intended to solve the performance bottleneck caused by traditional, poorly scalable communication structures such as buses. However, a large on-chip network involves issues related to, for instance, congestion and system control. Additionally, faults can cause problems in multiprocessor systems. These faults can be transient faults or permanent manufacturing faults, or they can appear due to aging. To solve the emerging traffic management and controllability issues, and to maintain system operation regardless of faults, a monitoring system is needed. The monitoring system should be dynamically applicable to various purposes and should fully cover the system under observation. In a large multiprocessor the distances between components can be relatively long. Therefore, the system should be designed so that the amount of energy-inefficient long-distance communication is minimized. This thesis presents a dynamically clustered distributed monitoring structure. The monitoring is distributed so that no centralized control is required for basic tasks such as traffic management and task mapping. To enable extensive analysis of different Network-on-Chip architectures, an in-house SystemC-based simulation environment was implemented. It allows transaction-level analysis without time-consuming circuit-level implementations during the early design phases of novel architectures and features. The presented analysis shows that the dynamically clustered monitoring structure can be efficiently utilized for traffic management in faulty and congested Network-on-Chip-based multiprocessor systems. The monitoring structure can also be successfully applied for task mapping purposes. Furthermore, the analysis shows that the presented in-house simulation environment is a flexible and practical tool for extensive Network-on-Chip architecture analysis.
Abstract:
This thesis presents a one-dimensional, semi-empirical dynamic model for the simulation and analysis of a calcium looping process for post-combustion CO2 capture. Reducing greenhouse gas emissions from fossil fuel power production requires rapid actions, including the development of efficient carbon capture and sequestration technologies. The development of new carbon capture technologies can be expedited by using modelling tools. Techno-economic evaluation of new capture processes can be done quickly and cost-effectively with computational models before building expensive pilot plants. Post-combustion calcium looping is a developing carbon capture process which utilizes fluidized bed technology with lime as a sorbent. The main objective of this work was to analyse the technological feasibility of the calcium looping process at different scales with a computational model. A one-dimensional dynamic model was applied to the calcium looping process, simulating the behaviour of the interconnected circulating fluidized bed reactors. The model couples fundamental mass and energy balance solvers with semi-empirical models describing solid behaviour in a circulating fluidized bed and the chemical reactions occurring in the calcium loop. In addition, fluidized bed combustion, heat transfer and core-wall layer effects were modelled. The calcium looping model framework was successfully applied to a 30 kWth laboratory-scale unit and a 1.7 MWth pilot-scale unit, and was used to design a conceptual 250 MWth industrial-scale unit. Valuable information was gathered on the behaviour of the small laboratory-scale device. In addition, the interconnected behaviour of the pilot plant reactors and the effect of solid fluidization on the thermal and carbon dioxide balances of the system were analysed. The scale-up study provided practical information on the thermal design of an industrial-sized unit, the selection of particle size, and operability in different load scenarios.
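A much-simplified steady-state CO2 balance over the carbonator, shown only to illustrate the type of mass-balance relation the dynamic model resolves in far greater detail; all flows, conversions and the equilibrium limit are placeholder values, not results from the thesis.

```python
# Simplified carbonator CO2 balance (CaO + CO2 -> CaCO3). Placeholder numbers only.
F_CO2 = 100.0   # CO2 entering with the flue gas [mol/s]
F_Ca = 400.0    # CaO circulation between calciner and carbonator [mol/s]
X_ave = 0.20    # average carbonation conversion reached by the sorbent [-]
E_eq = 0.90     # equilibrium-limited capture efficiency at carbonator temperature [-]

# CO2 uptake is limited either by the sorbent's carrying capacity or by equilibrium.
co2_absorbed = min(F_Ca * X_ave, E_eq * F_CO2)
capture_efficiency = co2_absorbed / F_CO2

# The captured CO2 leaves the carbonator as CaCO3 and is released again in the
# calciner (CaCO3 -> CaO + CO2), producing a concentrated CO2 stream.
print(f"Carbonator capture efficiency: {capture_efficiency:.0%}")
```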
Abstract:
Positron Emission Tomography (PET) using 18F-FDG plays a vital role in the diagnosis and treatment planning of cancer. However, the most widely used radiotracer, 18F-FDG, is not specific for tumours and can also accumulate in inflammatory lesions as well as normal physiologically active tissues, making diagnosis and treatment planning complicated for physicians. Malignant, inflammatory and normal tissues are known to have different pathways for glucose metabolism, which could be evident from different characteristics of the time-activity curves in a dynamic PET acquisition protocol. Therefore, we aimed to develop new image analysis methods for PET scans of the head and neck region that could differentiate between inflammation, tumour and normal tissues using this functional information within the radiotracer uptake areas. We derived different dynamic features from the time-activity curves of voxels in these areas and compared them with the widely used static parameter, SUV, using the Gaussian mixture model algorithm as well as the K-means algorithm, in order to assess their effectiveness in discriminating metabolically different areas. We also correlated the dynamic features with other clinical metrics obtained independently of PET imaging. The results show that some of the developed features can be useful in differentiating tumour tissues from inflammatory regions, and that some dynamic features also correlate positively with clinical metrics. If further explored, the proposed methods could prove useful in reducing false-positive tumour detections and in developing real-world applications for tumour diagnosis and contouring.
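A minimal sketch of the general workflow described above: simple dynamic features (slope and area under the curve) are extracted from synthetic time-activity curves and the voxels are clustered with K-means and a Gaussian mixture model; the curves and feature choices are illustrative assumptions, not the thesis' actual data or features.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(42)

# Hypothetical time-activity curves (TACs): 300 voxels x 20 time frames (kBq/mL).
t = np.linspace(1, 60, 20)                           # minutes after injection
tacs = np.vstack([
    5 + 0.30 * t + rng.normal(0, 0.5, (100, 20)),    # steadily accumulating (tumour-like)
    8 + 0.05 * t + rng.normal(0, 0.5, (100, 20)),    # early uptake, flat tail
    12 - 0.10 * t + rng.normal(0, 0.5, (100, 20)),   # washout pattern
])

# Simple dynamic features per voxel: overall slope and area under the curve.
slope = np.polyfit(t, tacs.T, 1)[0]
auc = np.trapz(tacs, t, axis=1)
features = np.column_stack([slope, auc])

kmeans_labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(features)
gmm_labels = GaussianMixture(n_components=3, random_state=0).fit_predict(features)
print(np.bincount(kmeans_labels), np.bincount(gmm_labels))
```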
Abstract:
With the shift towards many-core computer architectures, dataflow programming has been proposed as one potential solution for producing software that scales to a varying number of processor cores. Programming for parallel architectures is considered difficult, as the current popular programming languages are inherently sequential and introducing parallelism is typically up to the programmer. Dataflow, however, is inherently parallel, describing an application as a directed graph, where nodes represent calculations and edges represent data dependencies in the form of queues. These queues are the only allowed communication between the nodes, making the dependencies between the nodes explicit and thereby also the parallelism. Once a node has sufficient inputs available, it can, independently of any other node, perform calculations, consume inputs, and produce outputs. Dataflow models have existed for several decades and have become popular for describing signal processing applications, as the graph representation is very natural within this field. Digital filters are typically described with boxes and arrows in textbooks as well. Dataflow is also becoming more interesting in other domains, and in principle any application working on an information stream fits the dataflow paradigm. Such applications include, among others, network protocols, cryptography, and multimedia applications. As an example, the MPEG group standardized a dataflow language called RVC-CAL to be used within reconfigurable video coding. Describing a video coder as a dataflow network instead of in a conventional programming language makes the coder more readable, as it describes how the video data flows through the different coding tools. While dataflow provides an intuitive representation for many applications, it also introduces some new problems that need to be solved in order for dataflow to be more widely used. The explicit parallelism of a dataflow program is descriptive and enables improved utilization of the available processing units; however, the independent nodes also imply that some kind of scheduling is required. The need for efficient scheduling becomes even more evident when the number of nodes is larger than the number of processing units and several nodes are running concurrently on one processor core. There exist several dataflow models of computation, with different trade-offs between expressiveness and analyzability. These vary from rather restricted but statically schedulable models, with minimal scheduling overhead, to dynamic ones where each firing requires a firing rule to be evaluated. The model used in this work, namely RVC-CAL, is a very expressive language, and in the general case it requires dynamic scheduling; however, the strong encapsulation of dataflow nodes enables analysis, and the scheduling overhead can be reduced by using quasi-static, or piecewise static, scheduling techniques. The scheduling problem is concerned with finding the few scheduling decisions that must be made at run-time, while most decisions are pre-calculated. The result is then a set, as small as possible, of static schedules that are dynamically scheduled. To identify these dynamic decisions and to find the concrete schedules, this thesis shows how quasi-static scheduling can be represented as a model checking problem. This involves identifying the relevant information to generate a minimal but complete model to be used for model checking.
The model must describe everything that may affect scheduling of the application while omitting everything else in order to avoid state space explosion. This kind of simplification is necessary to make the state space analysis feasible. For the model checker to find the actual schedules, a set of scheduling strategies is defined which is able to produce quasi-static schedulers for a wide range of applications. The results of this work show that actor composition with quasi-static scheduling can be used to transform dataflow programs to fit many different computer architectures with different types and numbers of cores. This, in turn, enables dataflow to provide a more platform-independent representation, as one application can be fitted to a specific processor architecture without changing the actual program representation. Instead, the program representation is optimized by the development tools, in the context of design space exploration, to fit the target platform. This work focuses on representing the dataflow scheduling problem as a model checking problem and is implemented as part of a compiler infrastructure. The thesis also presents experimental results as evidence of the usefulness of the approach.
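A minimal sketch of the dataflow execution model described in this abstract: actors connected by FIFO queues fire only when sufficient tokens are available, consuming inputs and producing outputs under a naive dynamic scheduler. This is a generic illustration in Python, not RVC-CAL or the quasi-static schedulers developed in the thesis.

```python
from collections import deque

class Actor:
    """A dataflow node: fires when each input queue holds enough tokens."""
    def __init__(self, name, inputs, outputs, consume, fire):
        self.name, self.inputs, self.outputs = name, inputs, outputs
        self.consume, self.fire = consume, fire   # tokens needed per input; firing function

    def can_fire(self):
        return all(len(q) >= n for q, n in zip(self.inputs, self.consume))

    def step(self):
        # Consume the required tokens from every input, then produce outputs.
        tokens = [[q.popleft() for _ in range(n)] for q, n in zip(self.inputs, self.consume)]
        for q, value in zip(self.outputs, self.fire(tokens)):
            q.append(value)

# A tiny graph: a source queue -> (x2) actor -> a sink queue.
q1, q2 = deque(range(8)), deque()
doubler = Actor("x2", [q1], [q2], consume=[1], fire=lambda toks: [2 * toks[0][0]])

# Naive dynamic scheduler: repeatedly fire any actor whose firing rule is satisfied.
while doubler.can_fire():
    doubler.step()
print(list(q2))   # [0, 2, 4, ..., 14]
```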