970 results for Univariate Analysis box-jenkins methodology
Abstract:
The thesis studies the representations of different elements of contemporary work as they appear in Knowledge Management (KM). KM is approached as a management discourse that is seen to shape managerial practices in organizations. As representatives of KM discourse, four journal articles are analyzed using the methodology of Critical Discourse Analysis and the framework of Critical Management Studies, with special emphasis on the question of structure and agency. The results of the analysis reveal that structural elements such as information technology and organizational structures are strongly present in the most influential KM representations, which also makes their improvement a desirable course of action for managers. In contrast, agentic properties do not play a central role; they are subordinated to structural constraints of varying kind and degree. The thesis claims that one such constraint is KM discourse itself, which influences managerial and organizational choices and decision making. The thesis concludes that the way human beings are represented, studied and treated in management studies such as KM needs to be re-examined.
Abstract:
"How old is this fingermark?" This question is relatively often raised in trials when suspects admit that they have left their fingermarks at a crime scene but allege that the contact occurred at a time other than that of the crime and for legitimate reasons. However, no answer can currently be given to this question, because no fingermark dating methodology has been validated and accepted by the forensic community as a whole. Nevertheless, a review of past American cases highlighted that experts have given, and continue to give, courtroom testimony about the age of fingermarks, even though such testimony is mostly based on subjective and poorly documented parameters.
It was relatively easy to access fully documented American cases, which explains the choice of these examples. However, fingermark dating issues are encountered worldwide, and the lack of consensus among the answers given highlights the need for research on the subject. The present work thus aims to study the possibility of developing an objective fingermark dating method. As the questions surrounding the development of dating procedures are not new, various attempts have already been described in the literature. This research proposes a critical review of these attempts and highlights that most of the reported methodologies still suffer from limitations preventing their use in actual practice. Nevertheless, some approaches based on the evolution over time of intrinsic compounds detected in fingermark residue appear promising. Thus, an exhaustive review of the literature was conducted in order to identify the compounds available in fingermark residue and the analytical techniques capable of analysing them. The work concentrates on sebaceous compounds analysed using gas chromatography coupled with mass spectrometry (GC/MS) or Fourier transform infrared spectroscopy (FTIR). GC/MS analyses were conducted in order to characterize the initial variability of target lipids among fresh fingermarks of the same donor (intra-variability) and between fingermarks of different donors (inter-variability). As a result, many molecules were identified and quantified for the first time in fingermark residue. Furthermore, it was determined that the intra-variability of the fingermark residue was significantly lower than the inter-variability, but that both kinds of variability could be reduced using different statistical pre-treatments inspired by the field of drug profiling. It was also possible to propose an objective donor classification model allowing donors to be grouped into two main classes based on their initial lipid composition. These classes correspond to what are, relatively subjectively, called "good" or "bad" donors. The potential of such a model is high for the fingermark research field, as it allows the selection of representative donors based on compounds of interest. Using GC/MS and FTIR, an in-depth study of the effects of different influence factors on the initial composition and aging of target lipid molecules in fingermark residue was conducted. It was determined that univariate and multivariate models could be built to describe the aging of target compounds (transformed into aging parameters through pre-processing techniques), but that some influence factors affected these models more than others. In fact, the donor, the substrate and the application of enhancement techniques seemed to hinder the construction of reproducible models. The other factors tested (deposition moment, pressure, temperature and illumination) also affected the residues and their aging, but models combining different values of these factors still proved robust in well-defined situations. Furthermore, test fingermarks were analysed by GC/MS in order to be dated using some of the generated models. Correct estimates were obtained for 60% of the dated test fingermarks, and up to 100% when the storage conditions were known. These results are interesting, but further research should be conducted to evaluate whether these models could be used under uncontrolled casework conditions.
From a more fundamental perspective, a pilot study was also conducted on the use of Fourier transform infrared spectroscopy combined with chemical imaging (FTIR-CI) in order to gain information about fingermark composition and aging. More precisely, the ability of this technique to highlight influence factors and aging effects over large fingermark areas was investigated. This information was then compared with that given by individual FTIR spectra. It was concluded that while FTIR-CI is a powerful tool, its use for studying natural fingermark residue for forensic purposes has to be carefully considered. In fact, in this study, the technique did not yield more information on residue distribution than traditional FTIR spectra and also suffered from major drawbacks, such as long analysis and processing times, particularly when large fingermark areas need to be covered. Finally, the results obtained in this research allowed a formal and pragmatic framework for approaching fingermark dating questions to be proposed and discussed. This framework identifies the type of information that scientists are currently able to offer to investigators and/or the courts. It also describes the different iterative development steps that research should follow in order to achieve the validation of an objective fingermark dating methodology whose capabilities and limitations are well known and properly documented.
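The univariate aging models mentioned above amount, in outline, to calibrating a monotonic trend of an aging parameter against fingermarks of known age and then inverting that trend for an unknown mark. A minimal sketch of the idea, assuming a hypothetical lipid-ratio aging parameter and purely illustrative calibration values rather than the compounds or data of this work:

```python
# Minimal sketch of a univariate fingermark "aging curve", assuming a
# hypothetical aging parameter (e.g. a ratio of a degrading lipid to a more
# stable one) that decays roughly exponentially with age. All values are
# illustrative, not the thesis' actual compounds or measurements.
import numpy as np

# Calibration fingermarks of known age (days) and their aging parameter.
ages = np.array([0, 3, 7, 14, 21, 28], dtype=float)
ratio = np.array([1.00, 0.78, 0.60, 0.37, 0.22, 0.14])

# Linear model on the log-transformed parameter: ln(ratio) = a*age + b.
a, b = np.polyfit(ages, np.log(ratio), deg=1)

def estimate_age(observed_ratio: float) -> float:
    """Invert the fitted aging curve to date a test fingermark."""
    return (np.log(observed_ratio) - b) / a

print(f"estimated age: {estimate_age(0.45):.1f} days")
```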
Abstract:
Deliberate fires are borderless and timeless events that create a serious security problem. There have been many attempts to develop approaches to tackle this problem, but unfortunately acting effectively against deliberate fires has proven a complex challenge. This article reviews the current situation relating to deliberate fires: what do we know, how serious is the situation, how is it being dealt with, and what challenges are faced when developing a systematic and global methodology to tackle the issues? The repetitive nature of some types of deliberate fires is also discussed. Finally, drawing on the reality of repetition within deliberate fires and encouraged by successes obtained with other repetitive crimes (such as property crime or drug trafficking), we argue that using the intelligence process cycle as a framework for the follow-up and systematic analysis of fire events is a relevant approach. This is the first article in a series of three. It introduces the context and discusses the background issues in order to provide better underpinning knowledge for managers and policy makers planning to tackle this issue. The second part will present a methodology developed to detect and identify repetitive fire events from a set of data, and the third part will discuss the analysis of these data to produce intelligence.
Abstract:
In this work we present and analyze an experience of applying Project Based Learning (PBL) in the Physics II course of the Industrial Design degree at Girona University during the 2005-2006 academic year. This methodology was applied to the Electrostatics and Direct Current subjects. Furthermore, evaluation and self-evaluation results are presented, and the academic results are compared with those obtained in the same subjects using conventional teaching methods.
Abstract:
Overall Equipment Effectiveness (OEE) is the key metric of operational excellence. OEE monitors the actual performance of equipment relative to its performance capabilities under optimal manufacturing conditions. It looks at the entire manufacturing environment, measuring, in addition to equipment availability, the production efficiency while the equipment is available to run products, as well as the efficiency loss resulting from scrap, rework and yield losses. Analysis of the OEE reveals improvement opportunities for the operation. One of the tools used for OEE improvement is the Six Sigma DMAIC methodology, a set of practices originally developed to improve processes by eliminating defects. It asserts that continuous efforts to reduce variation in process outputs are key to business success, and that manufacturing and business processes can be measured, analysed, improved and controlled. In the case of the Bottomer line AD2378 in the Papsac Maghreb Casablanca plant, the OEE figure reached 48.65%, which is below the group's accepted OEE performance. This required immediate action to improve OEE. This Master's thesis focuses on the application of the Six Sigma DMAIC methodology to OEE improvement on the Bottomer line AD2378 in the Papsac Maghreb Casablanca plant. First, the use of Six Sigma DMAIC and OEE in operations measurement is discussed. The different DMAIC phases then allow the improvement focus to be identified, the causes of low OEE performance to be determined, and improvement solutions to be designed. These are implemented to allow further tracking of the improvements' impact on plant operations.
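For reference, the availability, performance and quality losses described above combine multiplicatively into the usual OEE figure. A minimal sketch of that calculation, with illustrative numbers rather than data from the AD2378 line:

```python
# Minimal sketch of the standard OEE decomposition described above:
# OEE = Availability x Performance x Quality. Input figures are
# illustrative only, not measurements from the AD2378 line.
def oee(planned_time, run_time, ideal_cycle_time, total_count, good_count):
    availability = run_time / planned_time                       # uptime share
    performance = (ideal_cycle_time * total_count) / run_time    # speed share
    quality = good_count / total_count                           # first-pass yield
    return availability * performance * quality

# Example: 480 min planned, 400 min actually running,
# ideal cycle time 0.5 min/unit, 700 units produced, 650 good.
print(f"OEE = {oee(480, 400, 0.5, 700, 650):.2%}")
```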
Abstract:
The objective of this study is to show that bone strains due to dynamic mechanical loading during physical activity can be analysed using the flexible multibody simulation approach. Strains within bone tissue play a major role in bone (re)modeling. Previous studies have shown that dynamic loading seems to be more important for bone (re)modeling than static loading. The finite element method has been used previously to assess bone strains. However, the finite element method may be limited to static analysis of bone strains because of the expensive computation required for dynamic analysis, especially for a biomechanical system consisting of several bodies. Furthermore, in vivo implementation of strain gauges on bone surfaces has been used previously to quantify the mechanical loading environment of the skeleton. However, in vivo strain measurement requires an invasive methodology, which is challenging and limited to certain regions of superficial bones only, such as the anterior surface of the tibia. In this study, an alternative numerical approach to analysing in vivo strains, based on the flexible multibody simulation approach, is proposed. In order to investigate the reliability of the proposed approach, three three-dimensional musculoskeletal models in which the right tibia is assumed to be flexible are used as demonstration examples. The models are employed in a forward dynamics simulation to predict tibial strains during a level walking exercise. The flexible tibia model is developed using the actual geometry of the subject's tibia, obtained from three-dimensional reconstruction of magnetic resonance images. An inverse dynamics simulation based on motion capture data from walking at a constant velocity is used to calculate the desired contraction trajectory for each muscle. In the forward dynamics simulation, a proportional-derivative (PD) servo controller is used to calculate each muscle force required to reproduce the motion, based on the desired muscle contraction trajectory obtained from the inverse dynamics simulation. Experimental measurements are used to verify the models and to check their accuracy in replicating the realistic mechanical loading environment measured in the walking test. The strains predicted by the models are consistent with in vivo strain measurements reported in the literature. In conclusion, the non-invasive flexible multibody simulation approach may be used as a surrogate for experimental bone strain measurement, and may thus be of use for detailed strain estimation of bones in different applications. Consequently, the information obtained from the present approach might be useful in clinical applications, including optimizing implant design and devising exercises to prevent bone fragility, accelerate fracture healing and reduce osteoporotic bone loss.
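The PD servo controller mentioned above drives each muscle with a force proportional to the tracking error in the contraction trajectory and to its rate of change. A minimal sketch of that control law, with hypothetical gains and trajectory values that do not come from the study:

```python
# Minimal sketch of a proportional-derivative (PD) servo controller of the
# kind described above: the muscle force is driven by the error between the
# desired contraction trajectory (from inverse dynamics) and the simulated
# one. Gains and trajectory values are hypothetical, not the study's.
def pd_muscle_force(desired_act, actual_act, desired_rate, actual_rate,
                    kp=1500.0, kd=40.0):
    """Force command from errors in muscle contraction level and rate."""
    return kp * (desired_act - actual_act) + kd * (desired_rate - actual_rate)

# Example: the simulated muscle lags slightly behind the desired contraction.
force = pd_muscle_force(desired_act=0.60, actual_act=0.55,
                        desired_rate=0.10, actual_rate=0.08)
print(f"commanded muscle force: {force:.1f} N")
```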
Abstract:
BACKGROUND: Many publications report the prevalence of chronic kidney disease (CKD) in the general population. Comparisons across studies are hampered because CKD prevalence estimates are influenced by study population characteristics and laboratory methods. METHODS: For this systematic review, two researchers independently searched PubMed, MEDLINE and EMBASE to identify all original research articles published between 1 January 2003 and 1 November 2014 that reported the prevalence of CKD in the European adult general population. Data on study methodology and the reporting of CKD prevalence results were independently extracted by two researchers. RESULTS: We identified 82 eligible publications and included 48 publications of individual studies in the data extraction. There was considerable variation in population sample selection. The majority of studies did not report the sampling frame used, and response rates ranged from 10 to 87%. With regard to the assessment of kidney function, 67% of studies used a Jaffe assay, whereas 13% used an enzymatic assay for creatinine determination. Isotope dilution mass spectrometry calibration was used in 29%. The CKD-EPI (52%) and MDRD (75%) equations were most often used to estimate the glomerular filtration rate (GFR). CKD was defined as an estimated GFR (eGFR) <60 mL/min/1.73 m² in 92% of studies. Urinary markers of CKD were assessed in 60% of the studies. CKD prevalence was reported by sex and by age strata in 54% and 50% of the studies, respectively. Among publications with a primary objective of reporting CKD prevalence, 39% reported a 95% confidence interval. CONCLUSIONS: The findings from this systematic review show considerable variation across studies in the methods used for sampling the general population and assessing kidney function. These results are used to provide recommendations to help optimize both the design and the reporting of future CKD prevalence studies, which will enhance the comparability of study results.
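For context, the MDRD and CKD-EPI equations mentioned above estimate GFR from serum creatinine plus demographic factors, and the eGFR < 60 mL/min/1.73 m² threshold then flags CKD. A minimal sketch using the commonly published four-variable MDRD form; the coefficients are quoted from memory and should be verified against the original publication, and the input values are illustrative:

```python
# Minimal sketch of how an eGFR-based CKD flag is typically derived, using
# the commonly published four-variable (IDMS-traceable) MDRD study equation.
# Coefficients should be checked against the original source before real use.
def egfr_mdrd(scr_mg_dl: float, age: float, female: bool, black: bool) -> float:
    egfr = 175.0 * scr_mg_dl ** -1.154 * age ** -0.203
    if female:
        egfr *= 0.742
    if black:
        egfr *= 1.212
    return egfr  # mL/min/1.73 m^2

def has_ckd(egfr: float) -> bool:
    # Threshold used as the CKD definition in 92% of the reviewed studies.
    return egfr < 60.0

print(has_ckd(egfr_mdrd(scr_mg_dl=1.4, age=67, female=True, black=False)))
```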
Abstract:
Combining headspace (HS) sampling with a needle-trap device (NTD) to determine priority volatile organic compounds (VOCs) in water samples results in improved sensitivity and efficiency compared to conventional static HS sampling. A 22-gauge, 51-mm stainless steel needle packed with Tenax TA and Carboxen 1000 particles is used as the NTD. Three different HS-NTD sampling methodologies are evaluated, and all give limits of detection for the target VOCs in the ng L-1 range. Active (purge-and-trap) HS-NTD sampling is found to give the best sensitivity but requires exhaustive control of the sampling conditions. Using the NTD to collect the headspace gas sample results in a combined adsorption/desorption mechanism. Testing different HS thermostating temperatures reveals a greater desorption effect when the sample is allowed to diffuse, whether passively or actively, through the sorbent particles. The limits of detection obtained with the simplest sampling methodology, static HS-NTD (5 mL aqueous sample in 20 mL HS vials, thermostated at 50 °C for 30 min with agitation), are sufficiently low to permit its application to the analysis of 18 priority VOCs in natural and waste waters. In all cases, compounds were detected below regulated levels.
Abstract:
A descriptive, exploratory study is presented, based on a questionnaire regarding the following aspects of reflective learning: a) self-knowledge, b) relating experience to knowledge, c) self-reflection, and d) self-regulation of the learning process. The questionnaire was completed by students on four different degree courses (social education, environmental sciences, nursing, and psychology). Specifically, the objectives of the self-reported reflective learning questionnaire are: i) to determine students' appraisal of the reflective learning methodology with regard to their reflective learning processes, ii) to obtain evidence of the main difficulties students encounter in integrating reflective learning methodologies into their learning processes, and iii) to collect students' perceptions of the main contributions of the reflective learning processes they have experienced.
Abstract:
In any discipline where uncertainty and variability are present, it is important to have principles which are accepted as inviolate and which should therefore drive statistical modelling, statistical analysis of data and any inferences from such an analysis. Despite the fact that two such principles have existed over the last two decades, and that from these a sensible, meaningful methodology has been developed for the statistical analysis of compositional data, the application of inappropriate and/or meaningless methods persists in many areas of application. This paper identifies at least ten common fallacies and confusions in compositional data analysis with illustrative examples, and provides readers with necessary, and hopefully sufficient, arguments to persuade the culprits why and how they should amend their ways.
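The methodology alluded to above is generally understood to be the log-ratio approach, whose core move is to take compositions out of their constant-sum constraint before applying standard statistics. A minimal sketch of the centred log-ratio (clr) transform on an illustrative three-part composition, offered only as an example of that family of methods:

```python
# Minimal sketch of a centred log-ratio (clr) transform, one commonly cited
# element of log-ratio methodology for compositional data. The three-part
# composition below is illustrative.
import numpy as np

def clr(composition):
    """Centred log-ratio transform of a positive composition."""
    x = np.asarray(composition, dtype=float)
    x = x / x.sum()                      # close the composition to unit sum
    g = np.exp(np.mean(np.log(x)))       # geometric mean of the parts
    return np.log(x / g)

print(clr([60.0, 30.0, 10.0]))           # transformed parts sum to zero
```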
Abstract:
Amber from a Lower Cretaceous outcrop at San Just, located in the eastern Iberian Peninsula (Escucha Formation, Maestrat Basin), was investigated to evaluate its physico-chemical properties. Thermogravimetric (TG) and differential thermogravimetric (DTG) analyses, infra-red spectroscopy, and elemental and C-isotope analyses were performed. Physico-chemical differences between the internal light nuclei and the darker peripheral portions of San Just amber can be attributed to processes of diagenetic alteration that preferentially took place at the external amber border colonized by microorganisms (fungi or bacteria) when the resin was still liquid or only slightly polymerized. δ13C values of different pieces of the same amber sample, from the nucleus to the external part, are remarkably homogeneous, as are the δ13C values of the darker peripheral portions and lighter inner parts of the same samples. Hence, neither invasive microorganisms nor diagenetic alteration changed the bulk isotopic composition of the amber. δ13C values of different amber samples range from -21.1 to -24‰, as expected for C3 plant-derived material. C-isotope analysis, coupled with palaeobotanical, TG and DTG data and infra-red spectra, suggests that San Just amber was exuded by a single conifer species belonging to either the Cheirolepidiaceae or Araucariaceae, conifer families that probably lived under stable palaeoenvironmental and palaeoecological conditions.
Abstract:
This paper uses the possibilities provided by regression-based inequality decomposition (Fields, 2003) to explore the contribution of different explanatory factors to international inequality in CO2 emissions per capita. In contrast to previous emissions inequality decompositions, which were based on identity relationships (Duro and Padilla, 2006), this methodology does not impose any specific relationship a priori. It therefore allows an assessment of the contribution to inequality of the different relevant variables. In short, the paper appraises the relative contributions of affluence, sectoral composition, demographic factors and climate. The analysis is applied to selected years of the period 1993–2007. The results show an important (though decreasing) contribution from demographic factors, as well as significant contributions from affluence and sectoral composition.
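In outline, the Fields (2003) decomposition regresses the log of per-capita emissions on the explanatory factors and attributes to each factor a share of the variance of the log, s_j = beta_j * cov(x_j, ln y) / var(ln y). A minimal sketch on synthetic data; variable names and values are illustrative, not the paper's dataset:

```python
# Minimal sketch of a Fields (2003)-style regression-based decomposition:
# regress log emissions per capita on explanatory factors, then attribute a
# share of total inequality (variance of the log) to each factor.
# All data below are synthetic and purely illustrative.
import numpy as np

rng = np.random.default_rng(0)
n = 200
affluence = rng.normal(9.0, 1.0, n)          # e.g. log GDP per capita
industry_share = rng.normal(0.3, 0.1, n)     # sectoral composition proxy
ln_co2 = 0.9 * affluence + 2.0 * industry_share + rng.normal(0, 0.5, n)

X = np.column_stack([np.ones(n), affluence, industry_share])
beta, *_ = np.linalg.lstsq(X, ln_co2, rcond=None)

var_y = np.var(ln_co2, ddof=1)
for name, b, x in [("affluence", beta[1], affluence),
                   ("sectoral composition", beta[2], industry_share)]:
    share = b * np.cov(x, ln_co2)[0, 1] / var_y
    print(f"{name}: {share:.2%} of log-emissions inequality")
```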
Abstract:
The purpose of this thesis was to define how product carbon footprint analysis and its results can be used in a company's internal development as well as in customer and interest group guidance, and how these factors are related to corporate social responsibility. A cradle-to-gate carbon footprint was calculated for three products: Torino Whole grain barley, Torino Pearl barley, and Elovena Barley grit & oat bran, all of them made of Finnish barley. The carbon footprint of the Elovena product was used to determine carbon footprints for porridge portions cooked in an industrial kitchen. The basic calculation data were collected from several sources. Most of the data originated from Raisio Group's contract farmers and from Raisio Group's cultivation, processing and packaging specialists. Data from national and European literature and database sources were also used. The electricity consumption for the porridge portions' carbon footprint calculations was determined through practical measurements. The carbon footprint calculations were conducted according to the ISO 14044 standard, and the PAS 2050 guide was also applied. A consequential functional unit was applied in the porridge portions' carbon footprint calculations. Most of the emissions over the barley products' life cycle originate from primary production. Nitrous oxide emissions from cultivated soil and the use and production of nitrogenous fertilisers contribute over 50% of the products' carbon footprint. Torino Pearl barley has the highest carbon footprint because it has the lowest processing yield. Reductions in the products' carbon footprints can be achieved through developments in cultivation and grain processing. The carbon footprint of a porridge portion can be reduced by using domestically produced plant-based ingredients and by making the best possible use of the kettle. Carbon footprint calculation can be used to identify possible improvement points related to corporate environmental responsibility. Several of the improvement actions are related to economic and social responsibility through better raw-material utilization and expense reductions.
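In outline, a cradle-to-gate footprint of this kind is an inventory exercise: activity data per functional unit are multiplied by emission factors (with non-CO2 gases such as N2O weighted by a global-warming potential) and summed over the life-cycle stages. A minimal sketch with purely illustrative flows and factors, not the thesis' inventory data:

```python
# Minimal sketch of a cradle-to-gate footprint aggregation in the spirit of
# ISO 14044: activity data per kg of product times emission factors, summed
# per life-cycle stage. All names and figures are illustrative, including the
# roughly 298 kg CO2e/kg 100-year GWP commonly quoted for N2O.
STAGES = {
    # stage: [(activity amount per kg product, emission factor in kg CO2e per unit)]
    "primary production": [(0.0015, 298.0),   # kg N2O from cultivated soil
                           (0.020, 5.6)],     # kg N fertiliser produced
    "processing":         [(0.15, 0.25)],     # kWh electricity
    "packaging":          [(0.03, 2.0)],      # kg packaging material
}

def footprint(stages):
    return {stage: sum(amount * factor for amount, factor in flows)
            for stage, flows in stages.items()}

per_stage = footprint(STAGES)
print(per_stage)
print(f"total: {sum(per_stage.values()):.2f} kg CO2e per kg product")
```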
Abstract:
This work proposes a sequential injection analysis (SIA) system for the spectrophotometric determination of norfloxacin (NOR) and ciprofloxacin (CIP) in pharmaceutical formulations. The methodology is based on the reaction of these drugs with p-(dimethylamino)cinnamaldehyde in micellar medium, producing orange-colored products (λmax = 495 nm). Beer's law was obeyed in the concentration ranges from 2.75x10-5 to 3.44x10-4 mol L-1 for NOR and from 3.26x10-5 to 3.54x10-4 mol L-1 for CIP, and the sampling rate was 25 h-1. Commercial samples were analyzed, and the results obtained with the proposed method were in good agreement with those obtained using the reference procedure at the 95% confidence level.
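The Beer's-law behaviour reported above is what makes quantification a simple linear calibration: absorbance at 495 nm is regressed against standard concentrations and the fit is inverted for unknowns. A minimal sketch of that step, with illustrative values rather than the reported calibration data:

```python
# Minimal sketch of a Beer's-law calibration: fit absorbance at 495 nm
# linearly against standard concentrations, then invert the fit to quantify
# an unknown sample. All values are illustrative, not the SIA method's data.
import numpy as np

conc = np.array([5e-5, 1e-4, 2e-4, 3e-4])        # mol L-1 standards
absorbance = np.array([0.072, 0.145, 0.292, 0.431])

slope, intercept = np.polyfit(conc, absorbance, deg=1)

def quantify(a_sample: float) -> float:
    """Concentration of an unknown from its absorbance (within the linear range)."""
    return (a_sample - intercept) / slope

print(f"sample concentration: {quantify(0.210):.2e} mol L-1")
```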
Abstract:
Methane combustion was studied using the Westbrook and Dryer model. This well-established simplified mechanism is very useful in combustion science because it notably reduces the computational effort. In the inversion procedure studied here, rate constants are obtained from [CO] concentration data. However, when the inherent experimental errors in chemical concentrations are considered, an ill-conditioned inverse problem must be solved, for which appropriate mathematical algorithms are needed. A recurrent neural network was chosen owing to its numerical stability and robustness. The proposed methodology was compared against the Simplex and Levenberg-Marquardt methods, the methods most commonly used for such optimization problems.
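For orientation, the inverse problem described above amounts to recovering rate constants that best reproduce measured [CO] profiles; Levenberg-Marquardt is one of the comparison methods named. A minimal sketch of that idea on a toy pseudo-first-order CO decay standing in for the full two-step Westbrook-Dryer mechanism (all values illustrative):

```python
# Minimal sketch of the inverse problem described above, with a
# Levenberg-Marquardt fit standing in for one of the comparison methods.
# A toy pseudo-first-order CO decay replaces the full two-step
# Westbrook-Dryer mechanism; all numbers are illustrative.
import numpy as np
from scipy.optimize import least_squares

t = np.linspace(0.0, 2.0, 30)                 # time, s
k_true = 1.8                                  # 1/s, the "unknown" rate constant
noise = 1 + 0.03 * np.random.default_rng(1).normal(size=t.size)
co_obs = np.exp(-k_true * t) * noise          # noisy normalized [CO] data

def residuals(params):
    (k,) = params
    return np.exp(-k * t) - co_obs            # model minus observations

fit = least_squares(residuals, x0=[1.0], method="lm")
print(f"recovered rate constant: {fit.x[0]:.3f} 1/s")
```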