908 results for Data Structure Operations
Abstract:
Precision of released figures is not only an important quality feature of official statistics, it is also essential for a good understanding of the data. In this paper we show a case study of how precision could be conveyed if the multivariate nature of data has to be taken into account. In the official release of the Swiss earnings structure survey, the total salary is broken down into several wage components. We follow Aitchison's approach for the analysis of compositional data, which is based on logratios of components. We first present different multivariate analyses of the compositional data whereby the wage components are broken down by economic activity classes. Then we propose a number of ways to assess precision.
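The logratio approach mentioned above can be illustrated with a minimal centered-logratio (clr) transform; the wage composition below is an invented example, not survey data.

```python
import numpy as np

def clr(x):
    """Centered logratio transform of a composition (strictly positive parts)."""
    x = np.asarray(x, dtype=float)
    g = np.exp(np.log(x).mean())        # geometric mean of the parts
    return np.log(x / g)

# hypothetical wage composition: base salary, bonus, overtime shares
comp = np.array([0.70, 0.20, 0.10])
z = clr(comp)
print(z)
print(z.sum())   # clr coordinates sum to zero by construction
```

The clr transform is scale-invariant, so only the relative information in the composition matters, which is exactly why analyses of this kind work on logratios rather than raw amounts.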
Abstract:
As stated in Aitchison (1986), a proper study of relative variation in a compositional data set should be based on logratios, and dealing with logratios excludes dealing with zeros. Nevertheless, it is clear that zero observations might be present in real data sets, either because the corresponding part is completely absent –essential zeros– or because it is below detection limit –rounded zeros. Because the second kind of zeros is usually understood as “a trace too small to measure”, it seems reasonable to replace them by a suitable small value, and this has been the traditional approach. As stated, e.g. by Tauber (1999) and by Martín-Fernández, Barceló-Vidal, and Pawlowsky-Glahn (2000), the principal problem in compositional data analysis is related to rounded zeros. One should be careful to use a replacement strategy that does not seriously distort the general structure of the data. In particular, the covariance structure of the involved parts –and thus the metric properties– should be preserved, as otherwise further analysis on subpopulations could be misleading. Following this point of view, a non-parametric imputation method is introduced in Martín-Fernández, Barceló-Vidal, and Pawlowsky-Glahn (2000). This method is analyzed in depth by Martín-Fernández, Barceló-Vidal, and Pawlowsky-Glahn (2003) where it is shown that the theoretical drawbacks of the additive zero replacement method proposed in Aitchison (1986) can be overcome using a new multiplicative approach on the non-zero parts of a composition. The new approach has reasonable properties from a compositional point of view. In particular, it is “natural” in the sense that it recovers the “true” composition if replacement values are identical to the missing values, and it is coherent with the basic operations on the simplex. This coherence implies that the covariance structure of subcompositions with no zeros is preserved. As a generalization of the multiplicative replacement, in the same paper a substitution method for missing values on compositional data sets is introduced.
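A minimal sketch of the multiplicative replacement described above (following Martín-Fernández et al., 2003): rounded zeros are set to a small value δ and the non-zero parts are rescaled multiplicatively so the total is preserved. The composition and the value of δ are illustrative choices.

```python
import numpy as np

def multiplicative_replacement(x, delta):
    """Replace rounded zeros by delta and rescale the non-zero parts
    multiplicatively so the composition keeps its original total."""
    x = np.asarray(x, dtype=float)
    total = x.sum()
    zeros = x == 0
    # non-zero parts shrink by the total mass assigned to the zeros
    return np.where(zeros, delta, x * (1 - delta * zeros.sum() / total))

comp = np.array([0.60, 0.25, 0.15, 0.0])   # last part below detection limit
r = multiplicative_replacement(comp, delta=0.005)
print(r, r.sum())   # total stays exactly 1.0
```

Because only the non-zero parts are rescaled by a common factor, ratios among them are unchanged, which is the property that preserves the covariance structure of zero-free subcompositions.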
Abstract:
In this paper we analyse, using Monte Carlo simulation, the possible consequences of incorrect assumptions about the true structure of the random effects covariance matrix and the true correlation pattern of residuals on the performance of an estimation method for nonlinear mixed models. The procedure under study is the well-known linearization method due to Lindstrom and Bates (1990), implemented in the nlme library of S-Plus and R. Its performance is studied in terms of bias, mean square error (MSE), and true coverage of the associated asymptotic confidence intervals. Ignoring other criteria, such as the convenience of avoiding over-parameterised models, it seems worse to erroneously assume some structure than to assume no structure at all when the latter would be adequate.
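The three performance criteria used in the study can be sketched with a generic Monte Carlo loop; the toy setting here (sample mean of a normal model with an asymptotic 95% interval) is only an illustration of the criteria, not the paper's nonlinear mixed-model design.

```python
import numpy as np

rng = np.random.default_rng(42)
mu_true, sigma, n, n_sim = 5.0, 2.0, 30, 5000
estimates, covered = [], 0

for _ in range(n_sim):
    sample = rng.normal(mu_true, sigma, n)
    est = sample.mean()
    se = sample.std(ddof=1) / np.sqrt(n)
    # 95% asymptotic (normal-theory) confidence interval
    lo, hi = est - 1.96 * se, est + 1.96 * se
    covered += lo <= mu_true <= hi
    estimates.append(est)

estimates = np.array(estimates)
bias = estimates.mean() - mu_true
mse = ((estimates - mu_true) ** 2).mean()
coverage = covered / n_sim
print(f"bias={bias:.4f}  MSE={mse:.4f}  coverage={coverage:.3f}")
```

True coverage slightly below the nominal 95% is expected here because the asymptotic normal quantile 1.96 is used with a small sample, which mirrors how misspecification or asymptotic approximations show up in coverage studies.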
Abstract:
The recently measured inclusive electron-proton cross section in the nucleon resonance region, performed with the CLAS detector at the Thomas Jefferson Laboratory, has provided new data for the nucleon structure function F2 with previously unavailable precision. In this paper we propose a description of these experimental data based on a Regge-dual model for F2. The basic inputs in the model are nonlinear complex Regge trajectories producing both isobar resonances and a smooth background. The model is tested against the experimental data, and the Q2 dependence of the moments is calculated. The fitted model for the structure function (inclusive cross section) is a limiting case of the more general scattering amplitude equally applicable to deeply virtual Compton scattering. The connection between the two is discussed.
Abstract:
Numerous sources of evidence point to the fact that heterogeneity within the Earth's deep crystalline crust is complex and hence may be best described through stochastic rather than deterministic approaches. As seismic reflection imaging arguably offers the best means of sampling deep crustal rocks in situ, much interest has been expressed in using such data to characterize the stochastic nature of crustal heterogeneity. Previous work on this problem has shown that the spatial statistics of seismic reflection data are indeed related to those of the underlying heterogeneous seismic velocity distribution. As of yet, however, the nature of this relationship has remained elusive due to the fact that most of the work was either strictly empirical or based on incorrect methodological approaches. Here, we introduce a conceptual model, based on the assumption of weak scattering, that allows us to quantitatively link the second-order statistics of a 2-D seismic velocity distribution with those of the corresponding processed and depth-migrated seismic reflection image. We then perform a sensitivity study in order to investigate what information regarding the stochastic model parameters describing crustal velocity heterogeneity might potentially be recovered from the statistics of a seismic reflection image using this model. Finally, we present a Monte Carlo inversion strategy to estimate these parameters and we show examples of its application at two different source frequencies and using two different sets of prior information. Our results indicate that the inverse problem is inherently non-unique and that many different combinations of the vertical and lateral correlation lengths describing the velocity heterogeneity can yield seismic images with the same 2-D autocorrelation structure. 
The ratio of vertical to lateral correlation length, however, remains roughly constant across all of these combinations, which indicates that, without additional prior information, the aspect ratio is the only parameter describing the stochastic seismic velocity structure that can be reliably recovered.
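The second-order statistics in question can be sketched numerically: estimate the 2-D autocorrelation of a field via FFT and read off vertical and lateral correlation lengths. The synthetic anisotropic field, the Gaussian filter widths, and the 1/e definition of correlation length below are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def autocorr2d(f):
    """Normalized 2-D autocorrelation via the Wiener-Khinchin theorem."""
    f = f - f.mean()
    F = np.fft.fft2(f)
    ac = np.fft.ifft2(F * np.conj(F)).real
    ac /= ac.flat[0]                      # zero-lag value -> 1
    return np.fft.fftshift(ac)

def corr_length(ac_1d):
    """Lag at which the autocorrelation first drops below 1/e."""
    below = np.where(ac_1d < 1.0 / np.e)[0]
    return below[0] if below.size else len(ac_1d)

# anisotropic "velocity" field: white noise smoothed more laterally (x)
# than vertically (z), via Gaussian low-pass filtering in the spectrum
rng = np.random.default_rng(0)
n = 256
noise = rng.standard_normal((n, n))
kz = np.fft.fftfreq(n)[:, None]
kx = np.fft.fftfreq(n)[None, :]
filt = np.exp(-(kz**2) * (2 * np.pi * 4) ** 2 / 2
              - (kx**2) * (2 * np.pi * 16) ** 2 / 2)
field = np.fft.ifft2(np.fft.fft2(noise) * filt).real

ac = autocorr2d(field)
c = n // 2
az = corr_length(ac[c:, c])   # vertical correlation length (lags along z)
ax = corr_length(ac[c, c:])   # lateral correlation length (lags along x)
print("aspect ratio a_x / a_z ~", ax / az)
```

As the abstract notes, many (az, ax) pairs can produce similar image statistics, but their ratio, the aspect ratio, is comparatively stable, which this kind of estimate makes concrete.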
Abstract:
We present the first density model of Stromboli volcano (Aeolian Islands, Italy) obtained by simultaneously inverting land-based (543) and sea-surface (327) relative gravity data. Modern positioning technology, a 1 x 1 m digital elevation model, and a 15 x 15 m bathymetric model made it possible to obtain a detailed 3-D density model through an iteratively reweighted smoothness-constrained least-squares inversion that explained the land-based gravity data to 0.09 mGal and the sea-surface data to 5 mGal. Our inverse formulation avoids introducing any assumptions about density magnitudes. At 125 m depth from the land surface, the inferred mean density of the island is 2380 kg m⁻³, with corresponding 2.5 and 97.5 percentiles of 2200 and 2530 kg m⁻³. This density range covers the rock densities of new and previously published samples of Paleostromboli I, Vancori, Neostromboli and San Bartolo lava flows. High-density anomalies in the central and southern part of the island can be related to two main degassing faults crossing the island (N41 and NM) that are interpreted as preferential regions of dyke intrusions. In addition, two low-density anomalies are found in the northeastern part and in the summit area of the island. These anomalies seem to be geographically related to past paroxysmal explosive phreato-magmatic events that have played important roles in the evolution of Stromboli Island by forming the Scari caldera and the Neostromboli crater, respectively.
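The smoothness-constrained least-squares idea can be shown in miniature. This toy 1-D Tikhonov-regularized inversion (invented Gaussian kernel, model, and noise level) illustrates the role of the smoothness operator L; it is a sketch, not the paper's 3-D reweighted gravity formulation.

```python
import numpy as np

rng = np.random.default_rng(1)
n_data, n_model = 20, 50

# toy linear forward operator: each datum is a Gaussian average of the model
x = np.linspace(0, 1, n_model)
centers = np.linspace(0, 1, n_data)
G = np.exp(-((x[None, :] - centers[:, None]) / 0.05) ** 2)

m_true = np.sin(2 * np.pi * x)              # hypothetical density anomaly
d = G @ m_true + 0.01 * rng.standard_normal(n_data)

# first-difference smoothness operator
L = np.diff(np.eye(n_model), axis=0)

lam = 1.0                                   # regularization weight
# normal equations of min ||G m - d||^2 + lam ||L m||^2
m_est = np.linalg.solve(G.T @ G + lam * L.T @ L, G.T @ d)
print("data misfit:", np.linalg.norm(G @ m_est - d))
```

The smoothness term makes the under-determined problem (20 data, 50 unknowns) solvable without assuming any particular model magnitudes, which is the same design choice highlighted in the abstract.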
Abstract:
Sales and operations research publications have increased significantly in recent decades. The concept of sales and operations planning (S&OP) has gained increasing recognition and has been put forward as a key area within Supply Chain Management (SCM). The development of S&OP stems from the need to determine future actions for both sales and operations, since off-shoring, outsourcing, complex supply chains and extended lead times make it challenging to respond to changes in the marketplace as they occur. The order intake of the case company has grown rapidly in recent years. Along with this growth, new challenges concerning data management and information flow have arisen due to increasing customer orders. To manage these challenges, the case company has implemented an S&OP process; however, the process is at an early stage and therefore does not manage the increased customer orders adequately. The objective of this thesis is to explore the S&OP process of the case company in depth and to give recommendations. The objectives are categorized into six groups to clarify the purpose of the thesis. The qualitative research methods used are active participant observation, qualitative interviews, an enquiry, education, and a workshop. Notably, demand planning was felt to be cumbersome; it is typically the biggest challenge in an S&OP process. The more proactive the sales forecasting, the longer the time horizon of operational planning can be. The S&OP process is 60 percent change management, 30 percent process development and 10 percent technology. Change management and continuous improvement can be arduous and are sometimes treated as secondary. It is important that different people are involved in improving the process and that the process is constantly evaluated. Process governance also plays a central role and has to be managed consciously.
Generally, the S&OP process was seen as important and all stakeholders were committed to it. Particular sections were experienced as more important than others, depending on the stakeholders' points of view. Recommendations for the objective groups are evaluated by achievable benefit and resource requirement. Urgent and easily implemented improvement recommendations should be executed first. The next steps are to develop a more coherent process structure and to refine cost awareness. Afterwards, demand planning, supply planning, and reporting should be developed more profoundly. Finally, an information technology system should be implemented to support the process phases.
Abstract:
After-sales business is an effective way to create profit and increase customer satisfaction in manufacturing companies. Despite this, certain business characteristics linked to these functions make it exceptionally challenging in its own way. This Master's Thesis examines the current state of data and inventory management in the case company with regard to the possibilities and challenges related to the consolidation of current business operations. The research examines the process steps, procedures, data requirements, data mining practices and data storage management of the spare part sales process, whereas the part focusing on inventory management reviews the current stock value and examines current practices and operational principles. There are two global after-sales units which supply spare parts, and the issues reviewed in this study are examined from both units' perspectives. The analysis focuses on the operations of the unit where functions would be centralized by default, if the change decisions are carried out. It was discovered that both data and inventory management have clear shortcomings, which result from a lack of internal instructions and established processes as well as a lack of cooperation with other stakeholders involved in the product's lifecycle. The main deliverable on the data management side was a guideline for consolidating the functions, tailored to the company's needs. Additionally, spare parts that could potentially be scrapped were listed and a proposal for inventory management instructions was drafted. If the suggested spare part materials are scrapped, the stock value will decrease by 46 percent. A guideline which was reviewed and commented on in this thesis was chosen as the basis of the inventory management instructions.
Abstract:
This thesis analyzes the structure of transnational trade in cocaine, heroin and marijuana. Starting from the world-systems perspective, it develops the hypothesis that drug trafficking forms a system that is the inverse of legal trade. Network analysis tools are applied to drug exchanges between countries. The thesis draws on two complementary data sources. The first is a unique database compiled by the United Nations Office on Drugs and Crime (UNODC) on significant seizures made worldwide between 1998 and 2007 (n = 47,629). These data are supplemented by information contained in some ten reports published by international drug-trafficking monitoring organizations. The directed exchange networks built from these data make it possible to examine the extent of trafficking between most countries of the world and to characterize each country's involvement. Chapters 3 and 4 deal with the structure of the trafficking itself. First, the different roles played by countries and the characteristics of the three drug markets are compared. The quantities in circulation and the interception rates are estimated for the 16 geographic regions defined by the UNODC. Second, the markets' structural characteristics are compared with those of legal markets. It emerges that drug markets are much less dense and that peripheral countries play a more pronounced role in them. Inequality of exchange characterizes both economies, but their structures are inverted. Chapter 5 proposes an analysis of the main source of risk for traffickers: drug seizures. The compiled data show that police drug seizures aggregated at the country level are mainly indicative of trafficking volume.
The potential bias related to police pressure is negligible for the quantities seized, but more pronounced for the number of seizures. Control agencies would thus be more able to modulate their activities than their eventual outcomes. The results also suggest that traffickers adopt diverse strategies to limit losses due to seizures. Chapter 6 focuses on the impact of structure on the price and value of drugs. Wholesale prices vary considerably from one country to another and from one drug to another. These variations are explained by the constraints traffickers face in the course of their activities. On the one hand, the value of drugs increases more rapidly when they are destined for countries where risks and import costs are high. On the other hand, the price markup is more pronounced when exchanges are directed toward countries at the core of the legal economy. Once again, the roles are inverted: the generally advantaged countries depend on the most disadvantaged ones, and poor countries take advantage of this to exploit the rich.
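The density comparison between drug markets and legal trade rests on the standard directed-graph density, observed arcs divided by the n(n−1) possible arcs. A minimal sketch with an invented country-to-country edge list:

```python
def directed_density(edges):
    """Density of a directed graph: observed arcs over possible arcs n*(n-1)."""
    nodes = {u for edge in edges for u in edge}
    n = len(nodes)
    return len(set(edges)) / (n * (n - 1)) if n > 1 else 0.0

# hypothetical shipment arcs between country codes (illustration only)
trade = [("CO", "US"), ("CO", "ES"), ("MX", "US"), ("AF", "IR"), ("IR", "TR")]
print(directed_density(trade))   # 5 arcs among 7 countries -> 5/42
```

A sparse network like this one, where a few peripheral producers feed core consumers, has far lower density than a legal trade network in which most country pairs exchange goods, which is the structural contrast the thesis quantifies.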
Abstract:
The major objective of the thesis is to evolve and apply certain computational procedures to evaluate the structure and properties of some simple polyatomic molecules, making use of spectroscopic data available in the literature. Though interest in such analyses has dwindled in recent times, there is tremendous scope and utility in attempting such calculations, as the precision and reliability of experimental techniques in spectroscopy have increased vastly due to the enormous sophistication of the instruments used for these measurements. In the present thesis an attempt is made to extract the maximum amount of information regarding the geometrical structure and interatomic forces of simple molecules from the experimental data on their microwave and infrared spectra.
Abstract:
Lasers play an important role in medical, sensor and data storage devices. This thesis focuses on the design, technology development, fabrication and characterization of hybrid ultraviolet Vertical-Cavity Surface-Emitting Lasers (UV VCSEL) with an organic laser-active material and inorganic distributed Bragg reflectors (DBR). Multilayer structures with different layer thicknesses, refractive indices and absorption coefficients of the inorganic materials were studied using theoretical model calculations. During the simulations, structure parameters such as materials and thicknesses were varied. This procedure was repeated several times during the design optimization process, incorporating feedback from technology and characterization. Two types of VCSEL devices were investigated. The first is an index-coupled structure consisting of bottom and top DBR dielectric mirrors. Between them lies the cavity, which includes the active region and defines the spectral gain profile. In this configuration the maximum electric field is concentrated in the cavity and can destroy the chemical structure of the active material. The second type of laser is a so-called complex-coupled VCSEL. In this structure the active material is placed not only in the cavity but also in parts of the DBR structure. The simulations show that such a distribution of the active material reduces the pumping power required to reach the lasing threshold. High efficiency is achieved by substituting the active material for the high-refractive-index dielectric in the periods closest to the cavity. The inorganic materials for the DBR mirrors were deposited with Plasma-Enhanced Chemical Vapor Deposition (PECVD) and Dual Ion Beam Sputtering (DIBS) machines. Extensive optimizations of the technological processes were performed. All processes are carried out in clean rooms of Class 1 and Class 10000.
The optical properties and the thicknesses of the layers are measured in situ by spectroscopic ellipsometry and spectroscopic reflectometry. The surface roughness is analyzed by atomic force microscopy (AFM), and images of the devices are taken with a scanning electron microscope (SEM). The silicon dioxide (SiO2) and silicon nitride (Si3N4) layers deposited by the PECVD machine show defects in the material structure and have higher absorption in the ultraviolet range compared to ion beam deposition (IBD). This results in low reflectivity of the DBR mirrors and also degrades the optical properties of the VCSEL devices. However, PECVD has the advantage that the stress in the layers can be tuned and compensated, in contrast to IBD at the moment. A sputtering machine, the Ionsys 1000 produced by the company Roth & Rau, is used for the deposition of silicon dioxide (SiO2), silicon nitride (Si3N4), aluminum oxide (Al2O3) and zirconium dioxide (ZrO2). The chamber is equipped with a main (sputter) ion source and an assist ion source. The dielectric materials were optimized by introducing additional oxygen and nitrogen into the chamber. DBR mirrors with different material combinations were deposited. The measured optical properties of the fabricated multilayer structures show excellent agreement with the results of the theoretical model calculations. The layers deposited by sputtering show high compressive stress. As the active region, a novel organic material with spiro-linked molecules is used. Two different materials were evaporated using a dye evaporation machine in the clean room of the department Makromolekulare Chemie und Molekulare Materialien (mmCmm). The Spiro-Octopus-1 organic material has a maximum emission at the wavelength λemission = 395 nm and the Spiro-Pphenal at λemission = 418 nm. Both have a high refractive index and can be combined with low-refractive-index materials like silicon dioxide (SiO2).
The sputtering method yields excellent optical quality of the deposited materials and high reflection of the multilayer structures. The bottom DBR mirrors of all VCSEL devices were deposited with the DIBS machine, whereas the top DBR mirrors were deposited either by PECVD or by a combination of PECVD and DIBS. The fabricated VCSEL structures were optically pumped by a nitrogen laser at the wavelength λpumping = 337 nm, and the emission was measured with a spectrometer. Emission of the VCSEL structures at wavelengths of 392 nm and 420 nm is observed.
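The reflectance of a DBR stack like those described above can be modeled with the standard characteristic-matrix (transfer-matrix) method at normal incidence. The refractive indices, number of pairs, and 400 nm design wavelength below are illustrative assumptions, not the thesis's measured values.

```python
import numpy as np

def dbr_reflectance(n0, ns, layers, wavelength):
    """Reflectance of a lossless thin-film stack at normal incidence.
    layers: list of (refractive_index, thickness) from the incidence side."""
    M = np.eye(2, dtype=complex)
    for n, d in layers:
        delta = 2 * np.pi * n * d / wavelength   # phase thickness of the layer
        M = M @ np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                          [1j * n * np.sin(delta), np.cos(delta)]])
    B, C = M @ np.array([1, ns])
    r = (n0 * B - C) / (n0 * B + C)
    return abs(r) ** 2

# quarter-wave SiO2/Si3N4-like stack designed for 400 nm (indices assumed)
lam0, nH, nL, n_pairs = 400.0, 2.05, 1.47, 10
stack = [(nH, lam0 / (4 * nH)), (nL, lam0 / (4 * nL))] * n_pairs
R = dbr_reflectance(n0=1.0, ns=1.5, layers=stack, wavelength=lam0)
print(f"peak reflectance: {R:.4f}")
```

Each quarter-wave pair multiplies the effective index contrast, so reflectance climbs rapidly with the number of pairs, which is why the simulated layer count and index contrast were central design parameters in the thesis.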
Abstract:
The component structure of a 34-item scale measuring different aspects of job satisfaction was investigated among extension officers in North West Province, South Africa. A simple random sampling technique was used to select 40 extension officers from whom data were collected. A structured questionnaire consisting of 34 job satisfaction items and 10 personal characteristic items was administered to the extension officers. Items on job satisfaction were measured at interval level and analyzed with Principal Component Analysis. Most of the respondents (82.5%) were males between 40 and 45 years of age, 85% were married and 87.5% had a diploma as their educational qualification. Furthermore, 54% had a household size of between 4 and 6 persons, whereas 75% were Christians. The majority of the extension officers lived in their job area (82.5%), while 80% covered at least 3 communities and 3 farmer groups. In terms of the number of farmers covered, only 40% of the extension officers covered more than 500 farmers, and 45% travelled more than 40 km to reach their farmers. From the job satisfaction items, 9 components were extracted to show areas of job satisfaction among extension officers: in-service training, research policies, communicating recommended practices, financial support for self and family, quality of technical help, opportunity to advance education, management and control of operations, the rewarding system, and sanctions. The results have several implications for motivating extension officers towards high job performance, especially with a large number of clients and a small number of extension agents.
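Component extraction of the kind used above can be sketched with a minimal principal component analysis on the item correlation matrix; the toy responses and the eigenvalue-greater-than-one (Kaiser) retention rule are illustrative assumptions, not the study's data or criterion.

```python
import numpy as np

rng = np.random.default_rng(7)
n_resp, n_items = 40, 10

# toy item responses driven by two latent "satisfaction" factors plus noise
latent = rng.standard_normal((n_resp, 2))
loadings = rng.standard_normal((2, n_items))
X = latent @ loadings + 0.5 * rng.standard_normal((n_resp, n_items))

# PCA on the correlation matrix (standardized items)
Z = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)
corr = Z.T @ Z / (n_resp - 1)
eigvals = np.linalg.eigvalsh(corr)[::-1]         # descending eigenvalues

n_components = int((eigvals > 1.0).sum())        # Kaiser criterion
print("eigenvalues:", np.round(eigvals, 2))
print("components retained:", n_components)
```

Each retained component corresponds to a cluster of correlated items, which is how the nine job-satisfaction areas in the study would emerge from the 34-item correlation structure.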