933 results for Statistical factor analysis
Abstract:
Raw measurement data does not always immediately convey useful information, but applying statistical analysis tools to measurement data can improve the situation. Data analysis offers benefits such as extracting meaningful insight from a dataset, basing critical decisions on the findings, and ruling out human bias through proper statistical treatment. In this thesis we analyze data from an industrial mineral processing plant with the aim of studying whether the quality of the final product, given by one variable, can be forecast with a model based on the other variables. The study uses tools such as Qlucore Omics Explorer (QOE) and Sparse Bayesian regression (SB). Linear regression is then used to build a model based on the subset of variables that carry the most significant weights in the SB model. The results obtained from QOE show that the variable representing the desired final product does not correlate with the other variables. For SB and linear regression, the results show that both models built on 1-day averaged data seriously underestimate the variance of the true data, whereas the two models built on 1-month averaged data are reliable and explain a larger proportion of the variability in the available data, making them suitable for prediction purposes. However, no single model fits the whole available dataset well; it is therefore proposed as future work either to build piecewise nonlinear regression models on the same dataset, or to have the plant provide a new dataset collected in a more systematic fashion than the present data.
Abstract:
This study presents an automatic, computer-aided analytical method called Comparison Structure Analysis (CSA), which can be applied to different dimensions of music. The aim of CSA is first and foremost practical: to produce dynamic and understandable representations of musical properties by evaluating the prevalence of a chosen musical data structure through a musical piece. Such a comparison structure may refer to a mathematical vector, a set, a matrix or another type of data structure, or even a combination of data structures. CSA depends on an abstract systematic segmentation that allows for a statistical or mathematical survey of the data. To choose a comparison structure is to tune the apparatus to be sensitive to an exclusive set of musical properties. CSA settles somewhere between traditional music analysis and computer-aided music information retrieval (MIR). Theoretically defined musical entities, such as pitch-class sets, set-classes and particular rhythm patterns, are detected in compositions using pattern extraction and pattern comparison algorithms that are typical within the field of MIR. In principle, the idea of comparison structure analysis can be applied to any time-series type data and, in the music-analytical context, to polyphonic as well as homophonic music. Tonal trends, set-class similarities, invertible counterpoints, voice-leading similarities, short-term modulations, rhythmic similarities and multiparametric changes in musical texture were studied. Since CSA allows for a highly accurate classification of compositions, its methods may be applicable to symbolic music information retrieval as well. The strength of CSA lies especially in the possibility of comparing observations across different musical parameters and of combining it with statistical and perhaps other music-analytical methods. The results of CSA depend on the adequacy of the similarity measure.
New similarity measures for tonal stability, rhythmic similarity and set-class similarity were proposed. The most advanced results were attained by employing automated function generation, comparable to so-called genetic programming, to search for an optimal model for set-class similarity measurements. However, the results of CSA seem to agree strongly, independent of the type of similarity function employed in the analysis.
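As a toy illustration of comparison structures and similarity functions (not the thesis's own measures), the sketch below computes interval-class vectors for pitch-class sets and compares them with cosine similarity; the set choices are illustrative.

```python
import math
from itertools import combinations

def interval_class_vector(pcs):
    """Six-entry interval-class vector of a pitch-class set (members 0-11)."""
    icv = [0] * 6
    for a, b in combinations(sorted(set(pcs)), 2):
        d = (b - a) % 12
        icv[min(d, 12 - d) - 1] += 1  # interval class 1..6
    return icv

def cosine_similarity(u, v):
    """One simple stand-in for a set-class similarity function."""
    dot = sum(x * y for x, y in zip(u, v))
    nu = math.sqrt(sum(x * x for x in u))
    nv = math.sqrt(sum(x * x for x in v))
    return dot / (nu * nv)

major = interval_class_vector({0, 4, 7})   # C major triad
minor = interval_class_vector({0, 3, 7})   # C minor triad
print(major == minor)                      # True: identical interval content
print(round(cosine_similarity(major, interval_class_vector({0, 2, 4})), 3))
```

The first comparison shows why a good similarity measure matters: major and minor triads share an interval-class vector, so any measure built only on interval content cannot distinguish them.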
Abstract:
A flow injection method for the quantitative analysis of vancomycin hydrochloride, C66H75Cl2N9O24·HCl (HVCM), based on the reaction with copper(II) ions, is presented. HVCM forms a lilac-blue complex with copper ions at pH ≅ 4.5 in aqueous solutions, with maximum absorption at 555 nm. The detection limit was estimated at about 8.5×10⁻⁵ mol L⁻¹, the quantitation limit at about 2.5×10⁻⁴ mol L⁻¹, and about 30 determinations can be performed per hour. The accuracy of the method was tested through recovery procedures in the presence of four different excipients, in the proportion 1:1 w/w. The results were compared with those obtained with the batch spectrophotometric and HPLC methods. Statistical comparison was done using Student's t-test. Complete agreement was found at the 95% confidence level between the proposed flow injection and batch spectrophotometric methods, which present similar precision (RSD: 2.1% vs. 1.9%).
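A two-method comparison of this kind can be sketched with a pooled two-sample Student's t-test. The recovery values below are hypothetical, not the paper's data; 2.306 is the standard two-tailed critical value for 8 degrees of freedom at the 95% level.

```python
import math
import statistics

# Hypothetical recoveries (%) from the flow injection and batch
# spectrophotometric methods; values are illustrative only.
fi    = [99.1, 100.4, 98.7, 101.2, 99.8]
batch = [99.5, 100.1, 99.0, 100.8, 100.2]

def two_sample_t(a, b):
    """Pooled two-sample Student's t statistic (equal variances assumed)."""
    na, nb = len(a), len(b)
    sp2 = ((na - 1) * statistics.variance(a)
           + (nb - 1) * statistics.variance(b)) / (na + nb - 2)
    return (statistics.mean(a) - statistics.mean(b)) / math.sqrt(sp2 * (1 / na + 1 / nb))

t = two_sample_t(fi, batch)
# Two-tailed critical value for df = 8 at the 95% confidence level.
print(abs(t) < 2.306)  # True -> no significant difference between the methods
```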
Abstract:
A flow injection (FI) method, using diffuse reflectance in the visible region of the spectrum, for the analysis of total sulfur as sulfate, precipitated as barium sulfate, is presented. The method was applied to the analysis of biodiesel, plant leaves and natural waters. The analytical signal (S) correlates linearly with sulfate concentration (C) between 20 and 120 ppm, through the equation S = -1.138 + 0.0934 C (r = 0.9993). The experimentally observed limit of detection is about 10 ppm, and the mean R.S.D. is about 3.0%. Real samples containing sulfate were analyzed, and the results obtained by the FI method were compared with those of the reference batch turbidimetric method using Student's t-test and the F-test.
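The reported calibration can be inverted to convert a reflectance signal into a sulfate concentration; the coefficients are those quoted above, while the example signal is hypothetical.

```python
# Calibration from the abstract: S = -1.138 + 0.0934 * C, valid for 20-120 ppm.
INTERCEPT, SLOPE = -1.138, 0.0934

def sulfate_ppm(signal):
    """Invert the reported calibration; only valid inside the linear range."""
    c = (signal - INTERCEPT) / SLOPE
    if not 20 <= c <= 120:
        raise ValueError(f"{c:.1f} ppm is outside the 20-120 ppm linear range")
    return c

# A (hypothetical) signal of 4.5 corresponds to about 60.4 ppm sulfate.
print(round(sulfate_ppm(4.5), 1))
```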
Abstract:
The Fed model is a widely used market valuation model. It is often used in market analysis of the S&P 500 index as a shorthand measure of the attractiveness of equity, and as a timing device for allocating funds between equity and bonds. The Fed model assumes a fixed relationship between bond yield and earnings yield, and this relationship is often assumed to hold in market valuation. In this paper we test the Fed model from a historical perspective on the European markets; the markets of the United States are also included for comparison. The purpose of the tests is to determine whether the Fed model and its underlying assumptions hold on different markets. The various tests are made on time-series data ranging from 1973 to the end of 2008. The statistical methods used are regression analysis, cointegration analysis and Granger causality. The empirical results do not give strong support for the Fed model. The relationships assumed by the Fed model are statistically invalid in most of the markets examined, and therefore the model is generally not valid for valuation purposes. The results vary between markets, which casts doubt on the general use of the Fed model across different market conditions and markets.
Abstract:
The combination of two low-cost classical procedures based on titrimetric techniques is presented for the determination of pyridoxine hydrochloride in pharmaceutical samples. Initially, experiments were carried out to determine both pKa1 and pKa2 values, which were compared to literature values and theoretical predictions. Commercial samples containing pyridoxine hydrochloride were electrochemically analysed by exploiting their acid-base and precipitation reactions. Potentiometric titrations followed the reaction of the ionizable hydrogens present in pyridoxine hydrochloride, with NaOH used as titrant, while the conductimetric method was based on the chemical precipitation between the chloride of the pyridoxine hydrochloride molecule and Ag+ ions from silver nitrate, which changes the conductivity of the solution. Both methods were applied to the same commercial samples, leading to concordant results when compared by statistical tests (95 and 98% confidence levels). Recoveries ranging from 99.0 to 108.1% were observed, showing no significant interference with the results.
Abstract:
The objective of this master's thesis was to study how customer relationships should be assessed and categorized in order to support customer relationship management (CRM) in the context of business-to-business (B2B) professional services. This sophisticated and complex market only rarely exploits the possibilities of CRM, and even then the focus is often on technology. The theoretical part first considered CRM from the value-chain point of view and then discussed the cyclical nature of relationships. The case study focused on a B2B professional service firm. The data were collected from company databases and comprised a sample of 90 customers. The research was conducted in three phases: first studying the age of the relationships, then their service type, and finally executing a cluster analysis. The data were analysed with the statistical analysis program SAS Enterprise Guide. The results indicate that there are great differences in how customer relationships develop. While some relationships are dynamically growing and changing, most customers remain constant. This implies that customer expectations and requirements are similarly divergent and that relationships should be managed accordingly.
Abstract:
Systems biology is a new, emerging and rapidly developing multidisciplinary research field that aims to study biochemical and biological systems from a holistic perspective, with the goal of providing a comprehensive, system-level understanding of cellular behaviour. In this way, it addresses one of the greatest challenges faced by contemporary biology: to comprehend the function of complex biological systems. Systems biology combines various methods that originate from scientific disciplines such as molecular biology, chemistry, engineering sciences, mathematics, computer science and systems theory. Systems biology, unlike "traditional" biology, focuses on high-level concepts such as network, component, robustness, efficiency, control, regulation, hierarchical design, synchronization, concurrency, and many others. The very terminology of systems biology is "foreign" to "traditional" biology; it marks a drastic shift in the research paradigm and indicates the close linkage of systems biology to computer science. One of the basic tools utilized in systems biology is the mathematical modelling of life processes, tightly linked to experimental practice. The studies contained in this thesis revolve around a number of challenges commonly encountered in computational modelling in systems biology. The research comprises the development and application of a broad range of methods originating in the fields of computer science and mathematics for the construction and analysis of computational models in systems biology. In particular, the research is set in the context of two biological phenomena chosen as modelling case studies: 1) the eukaryotic heat shock response and 2) the in vitro self-assembly of intermediate filaments, one of the main constituents of the cytoskeleton.
The range of presented approaches spans from heuristic, through numerical and statistical, to analytical methods applied in the effort to formally describe and analyse the two biological processes. Although applied to these case studies, the presented methods are not limited to them and can be utilized in the analysis of other biological mechanisms as well as complex systems in general. The full range of developed and applied modelling techniques, together with the model analysis methodologies, constitutes a rich modelling framework. Moreover, the presentation of the developed methods, their application to the two case studies and the discussion of their potentials and limitations point to the difficulties and challenges one encounters in the computational modelling of biological systems. The problems of model identifiability, model comparison, model refinement, model integration and extension, the choice of the proper modelling framework and level of abstraction, and the choice of the proper scope of the model run through this thesis.
Abstract:
This study aimed to evaluate the genetic variability among individuals of a base population of Eucalyptus grandis and to build a molecular marker database for the analyzed populations. The Eucalyptus grandis base population comprised 327 individuals from Coff's Harbour, Atherton and Rio Claro, with a few plants from other sites (Belthorpe MT. Pandanus, Kenilworth, Yabbra, etc.). Since this base population had a heterogeneous composition, the groups were divided according to geographic localization (latitude and longitude) and genetic breeding level, and the influence of these two factors on the detected genetic variability was discussed. The RAPD technique allowed the evaluation of 70 loci. The binary matrix was used to estimate the genetic similarity among individuals using Jaccard's coefficient, and parametric statistical tests were used to compare mean within-group similarities. The results showed that the base population had wide genetic variability, with a mean genetic similarity of 0.328. Sub-group 3 (wild material from the Atherton region) showed a mean genetic similarity of 0.318, and S.P.A. (from the Coff's Harbour region) a mean genetic similarity of 0.322; the latter was found to be very important for the maintenance of variation in the base population, since the individuals from those groups account for most of it (48.3%). Base population plants with genetic similarity higher than 0.60 should be phenotypically re-analyzed in order to clarify the tendency of genetic variability during breeding programs.
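The similarity estimation from the binary RAPD matrix can be sketched with Jaccard's coefficient; the two 10-locus band profiles below are hypothetical.

```python
def jaccard(a, b):
    """Jaccard coefficient for binary marker profiles (1 = band present).
    Joint absences (0,0) are ignored, as is standard for this coefficient."""
    both = sum(1 for x, y in zip(a, b) if x == 1 and y == 1)
    either = sum(1 for x, y in zip(a, b) if x == 1 or y == 1)
    return both / either if either else 0.0

# Hypothetical 10-locus RAPD band profiles for two individuals.
ind1 = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]
ind2 = [1, 1, 1, 0, 0, 0, 1, 0, 0, 0]
print(jaccard(ind1, ind2))  # 0.5 -> 3 shared bands out of 6 present in either
```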
Abstract:
The purpose of this academic economic geographical dissertation is to study and describe how competitiveness in the Finnish paper industry developed during 2001–2008. During these years, the Finnish paper industry faced economically challenging times. This dissertation attempts to fill the existing gap between theoretical and empirical discussions concerning economic geographical issues in the paper industry. The main research questions are: How did supply chain costs and margins develop during 2001–2008? How do sales prices, transportation, and fixed and variable costs correlate with gross margins in a spatial context? The research object for this case study is a typical large Finnish paper mill that exports over 90% of its production. The longitudinal economic research data were obtained from the case mill's controlling system, and correlation (R²) analysis was used as the main research method. The time series cover monthly economic and manufacturing observations from the mill from 2001 to 2008. The study reveals the development of prices, costs and transportation in the case mill, and it shows how economic variables correlate with the paper mill's gross margins in various markets in Europe. The research methods of economic geography offer perspectives that pay attention to spatial (market) heterogeneity. This type of research has been quite scarce in the research tradition of Finnish economic geography and supply chain management, and this case study gives new insight into that tradition and its applications. As a concrete empirical result, this dissertation states that the competitive advantages of the Finnish paper industry were significantly weakened during 2001–2008 by low paper prices, costly manufacturing and expensive transportation.
Statistical analysis shows that, in several important markets, transport costs lower gross margins as much as decreasing paper prices do, which was a new finding. Paper companies should continuously pay attention to lowering manufacturing and transport costs to achieve more profitable economic performance. A mill's location far from its markets clearly has an economic impact on paper manufacturing, as paper demand is decreasing and oversupply is pressing paper prices down. Therefore, market and economic forecasting in the paper industry is advantageous at the country and product levels while simultaneously taking into account spatially specific economic-geographical dimensions.
Abstract:
Among the production factors, water and fertilizer are the elements that most restrict cashew production, and applying the precise amounts of these factors is essential to the success of the crop. This research aimed to determine the best factor-product ratio and to analyze technical and economic indicators of the productivity of the cashew clone BRS 189 (Anacardium occidentale) in response to the production factors water and potassium. The experiment was conducted from May 2009 to December 2009 in an experimental area of 56.0 m x 112.0 m in the Curu–Pentecoste irrigated perimeter, located in the municipality of Pentecoste, Ceará, Brazil. The production factors water (W) and potassium (K) were the independent variables and productivity (Y) the dependent variable. Ten statistical models that have proven satisfactory for obtaining production functions were tested. The marginal rate of substitution was obtained as the ratio of the marginal physical product of potassium to the marginal physical product of water. The model best suited to the conditions of the experiment was the quadratic polynomial without intercept and interaction. Considering that the price of water was R$ 0.10 mm⁻¹, the price of potassium R$ 2.19 kg⁻¹ and the price of cashew R$ 0.60 kg⁻¹, the amounts of water and K2O that maximize net income were 6,349.1 L plant⁻¹ of water and 128.7 g plant⁻¹ year⁻¹, respectively. Substituting these values into the production function, the maximum net income was achieved with a yield of 7,496.8 kg ha⁻¹ of cashew.
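The economic optimum for the selected model follows from equating each factor's marginal value product to its price. The coefficients below are hypothetical placeholders, not the thesis's fitted values; only the prices come from the abstract.

```python
# Selected model form: quadratic polynomial without intercept or
# interaction, Y(W, K) = a*W + b*W**2 + c*K + d*K**2.
a, b = 2.0, -1.5e-4   # water terms (hypothetical coefficients)
c, d = 40.0, -0.15    # potassium terms (hypothetical coefficients)
p_y, p_w, p_k = 0.60, 0.10, 2.19  # R$ per kg cashew, mm water, kg K2O (abstract)

# First-order conditions: p_y*(a + 2*b*W) = p_w and p_y*(c + 2*d*K) = p_k,
# i.e. marginal value product equals factor price for each input.
w_opt = (p_w / p_y - a) / (2 * b)
k_opt = (p_k / p_y - c) / (2 * d)

yield_opt = a * w_opt + b * w_opt**2 + c * k_opt + d * k_opt**2
net_income = p_y * yield_opt - p_w * w_opt - p_k * k_opt
print(round(w_opt, 1), round(k_opt, 1), round(net_income, 2))
```

Because b and d are negative, the quadratic terms enforce diminishing returns, so the first-order conditions give a maximum rather than a minimum.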
Abstract:
OBJECTIVE: To identify predictors of death in blunt trauma patients sustaining pelvic fractures and, subsequently, to compare them with a previously reported series from the same center. METHOD: Retrospective analysis of trauma registry data, including blunt trauma patients older than 14 years sustaining pelvic fractures, admitted from 2008 to 2010. Patients were assigned to group 1 (deaths) or group 2 (survivors). We used Student's t-test, the chi-square test and Fisher's exact test for statistical analysis, considering p < 0.05 as significant. Subsequently, we compared the predictors of death between the two periods. RESULTS: Seventy-nine cases were included. Mean RTS, ISS and TRISS were, respectively, 6.44 ± 2.22, 28.0 ± 15.2 and 0.74 ± 0.33. Nineteen patients died (24.0%). The main cause of death was hemorrhage (42.1%). Group 1 was characterized by (p < 0.05) lower mean systolic blood pressure and Glasgow coma scale on admission; higher mean heart rate, head AIS, extremity AIS and ISS; and a higher frequency of severe head injuries and complex pelvic fractures. Comparing the two periods, we note that the anatomic and physiologic severity of injury increased (RTS and ISS means). Furthermore, there was a decrease in the impact of associated thoracic and abdominal injuries on prognosis and an association of lethality with the presence of complex pelvic fractures. CONCLUSION: There were significant changes in the predictors of death between the two periods. The impact of associated thoracic and abdominal injuries decreased while the importance of severe retroperitoneal hemorrhage increased. There was also an increase in trauma severity, which accounted for the high lethality.
Abstract:
Energy efficiency is one of the major objectives to be achieved in order to use the world's limited energy resources in a sustainable way. Since radiative heat transfer is the dominant heat transfer mechanism in most fossil fuel combustion systems, more accurate insight and models can improve the energy efficiency of newly designed combustion systems. The radiative properties of combustion gases are highly wavelength dependent, and better models for calculating them are needed in the modeling of large-scale industrial combustion systems. With detailed knowledge of the spectral radiative properties of gases, the modeling of combustion processes in different applications can be made more accurate. In order to propose a new method for effective non-gray modeling of radiative heat transfer in combustion systems, different models for the spectral properties of gases, including the SNBM, EWBM and WSGGM, were studied in this research. Building on this detailed analysis of the different approaches, the thesis presents new methods for gray and non-gray radiative heat transfer modeling in homogeneous and inhomogeneous H2O–CO2 mixtures at atmospheric pressure. The proposed method is able to support the modeling of a wide range of combustion systems, including the oxy-fired combustion scenario. The new methods are based on implementing pre-obtained correlations for the total emissivity and band absorption coefficient of H2O–CO2 mixtures at different temperatures, gas compositions and optical path lengths. They can easily be used within any commercial CFD software for radiative heat transfer modeling, resulting in more accurate, simpler and faster calculations. The new methods were successfully used in CFD modeling by applying them to an industrial-scale backpass channel under oxy-fired conditions.
The developed approaches are more accurate than the other methods considered and can provide a complete explanation and detailed analysis of radiative heat transfer in different systems under different combustion conditions. The methods were verified against benchmark cases and showed a good level of accuracy and computational speed. Furthermore, the implementation of the suggested banded approach in CFD software is straightforward.
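A weighted-sum-of-gray-gases (WSGG) total-emissivity correlation, the kind of pre-obtained correlation such methods embed in CFD codes, can be sketched as follows; the weights and absorption coefficients below are hypothetical placeholders, not a fitted H2O–CO2 model, since real coefficients depend on temperature and gas composition.

```python
import math

# WSGG total emissivity: eps = sum_i a_i * (1 - exp(-k_i * p * L)),
# where a_i are temperature-dependent weights and k_i gray-gas
# absorption coefficients. All numbers below are hypothetical.
weights = [0.4, 0.3, 0.2]        # a_i (remainder treated as clear gas)
absorption = [0.2, 2.0, 20.0]    # k_i in 1/(atm*m)
p, L = 0.3, 2.0                  # partial pressure (atm) and path length (m)

eps = sum(a * (1 - math.exp(-k * p * L)) for a, k in zip(weights, absorption))
print(round(eps, 3))
```

The optically thick gray gas (k = 20) saturates toward its weight while the thin one (k = 0.2) contributes almost linearly in p·L, which is what lets a handful of gray gases mimic a highly wavelength-dependent spectrum.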
Abstract:
This thesis analyses the language used in the 'Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition' (DSM-5), specifically in its chapter 'Gender Dysphoria'. The DSM-5 is a classification of officially recognized mental disorders aimed at mental health professionals, and it also contains symptom-based guidelines for diagnosing these disorders. 'Gender dysphoria' is a medical term referring to the psychological distress caused by a gender identity that differs from one's biological sex. Linking gender identity to a mental health problem involves arguments grounded in ideological values, and this thesis analyses the ideological content of the 'Gender Dysphoria' chapter from a critical perspective. The study examines the ideological reflections of the language used in the chapter, and their social effects, through three research questions: 1) How is the diagnosed individual represented in the text? 2) How does the text construct an image of gender dysphoria as a mental disorder? 3) How may the results of the analysis affect conceptions of the relationship between gender and mental health and of the diagnosis of gender dysphoria? The analysis employs M. A. K. Halliday's theory of transitivity as its method, and the social effects of the results are analysed using Norman Fairclough's model of discourse analysis. The transitivity analysis examines the writers' linguistic choices, which according to Halliday's theory reflect their personal experiences of the surrounding world. The study revealed that gender dysphoria is presented as a mental health problem by separating it from the individual into a distinct actor that performs various processes within the individual.
Among the individuals, children in particular are presented in the text in the light of an ideology strongly based on traditional gender roles, which is reflected in the description of the symptoms. The analysis also reveals logical problems in the description of children's symptoms, which lead to contradictions between the symptoms and the mental health problem and undermine the grounds on which children are diagnosed. The study concludes by proposing that the rationale of the diagnostic guidelines and criteria for gender dysphoria be revised to rest generally on the right to self-determination of gender identity and that, with regard to children, the text include possible scientific justifications that would resolve the contradictions contained in the current diagnostic guidelines and justify the diagnosing of children.
Abstract:
The surgical specimens from 51 men submitted to radical prostatectomy for localized prostate cancer were examined by immunohistochemistry using a proliferating cell nuclear antigen (PCNA) monoclonal antibody to evaluate the proliferative index (PI). The relationship between PI, biological variables and p53 protein expression was evaluated by immunohistochemistry. PI was low in invasive localized prostate carcinoma (mean, 12.4%), and the incidence of PCNA-positive cells was significantly higher in tumors with p53 expression (P = 0.0226). There was no statistical difference in PCNA values when biological parameters such as Gleason score, tumor volume, extraprostatic involvement, seminal vesicle infiltration or lymph node metastasis were considered. We conclude that proliferative activity is usually low in prostate carcinoma but is correlated with p53 immunostaining, indicating that p53 is important in cell cycle control in this neoplasm.