932 results for methods of analysis
Abstract:
Credit risk assessment is an integral part of banking. Credit risk means that the expected return will not materialise if the customer fails to fulfil its obligations; a key component of banking is therefore setting acceptance criteria for granting loans. The theoretical part of the study focuses on the key components, identified in the literature, of banks' credit assessment methods when extending credit to large corporations. The main component is the Basel II Accord, which sets the regulatory requirements for banks' credit risk assessment methods. The empirical part comprises, as a primary source, an analysis of major Nordic banks' annual reports and risk management reports. As a secondary source, complementary interviews were carried out with senior credit risk assessment personnel. The findings indicate that all major Nordic banks use a combination of quantitative and qualitative information in their credit risk assessment models when extending credit to large corporations. The relative weight of qualitative information depends on the selected approach to credit rating, i.e. point-in-time or through-the-cycle.
Abstract:
One approach to verifying the adequacy of methods for estimating reference evapotranspiration (ET0) is comparison with the Penman-Monteith method, recommended by the Food and Agriculture Organization of the United Nations (FAO) as the standard method for estimating ET0. This study aimed to compare the Makkink (MK), Hargreaves (HG) and Solar Radiation (RS) methods for estimating ET0 with Penman-Monteith (PM). For this purpose, we used daily data of global solar radiation, air temperature, relative humidity and wind speed for the year 2010, obtained from the automatic meteorological station of the National Institute of Meteorology (latitude 18.9166° S, longitude 48.2505° W, altitude 869 m), situated on the campus of the Federal University of Uberlandia, MG, Brazil. Results for the period were analysed on a daily basis using regression analysis with the linear model y = ax, where the dependent variable was the Penman-Monteith estimate and the independent variable the ET0 estimate of each evaluated method. A methodology was applied to check the influence of the standard deviation of daily ET0 on the comparison of methods. The evaluation indicated that the Solar Radiation and Penman-Monteith methods cannot be compared, while the Hargreaves method provided the most efficient adjustment for estimating ET0.
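As an illustration of the comparison procedure this abstract describes, here is a minimal sketch of the zero-intercept fit y = ax, with PM as the dependent variable and one evaluated method as the independent variable. The ET0 values below are made-up demonstration numbers, not the study's data.

```python
import numpy as np

et0_pm = np.array([3.9, 4.4, 5.1, 4.8, 3.2, 4.0])  # Penman-Monteith ET0 (mm/day), dependent; illustrative values
et0_hg = np.array([4.1, 4.6, 5.0, 5.2, 3.5, 4.2])  # Hargreaves ET0 (mm/day), independent; illustrative values

# Least-squares slope of the zero-intercept model y = a*x: a = sum(x*y) / sum(x^2)
a = np.sum(et0_hg * et0_pm) / np.sum(et0_hg ** 2)
residuals = et0_pm - a * et0_hg
r2 = 1.0 - np.sum(residuals ** 2) / np.sum((et0_pm - et0_pm.mean()) ** 2)
print(f"slope a = {a:.3f}, R^2 = {r2:.3f}")  # a close to 1 suggests close agreement with PM
```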
Abstract:
The mobility of atrazine in soil has contributed to the detection of levels above the legal limit in surface water and groundwater in Europe and the United States. The use of new formulations can reduce or minimize the impacts caused by the intensive use of this herbicide in Brazil, mainly in regions with higher agricultural intensification. The objective of this study was to compare the leaching of a commercial formulation of atrazine (WG) with that of a controlled-release formulation (xerogel) using bioassay and chromatographic methods of analysis. The experiment was a split-plot randomized block design with four replications, in a (2 x 6) + 1 arrangement. The atrazine formulations (WG and xerogel) were allocated to the main plots, and the herbicide concentrations (0, 3200, 3600, 4200, 5400 and 8000 g ha-1) to the subplots. Leaching was determined comparatively by bioassays with oat and by chromatographic analysis. The results showed a greater concentration of the herbicide in the topsoil (0-4 cm) in the treatment with the xerogel formulation than with the commercial formulation, which contradicts the results obtained with the bioassays, probably because the amount of herbicide available for uptake by plants in the xerogel formulation is less than that available in the WG formulation.
Abstract:
Decaffeinated coffee accounts for 10 percent of coffee sales in the world; it is preferred by consumers who wish to avoid or are sensitive to the effects of caffeine. This article presents an analytical comparison of capillary electrophoresis (CE) and high performance liquid chromatography (HPLC) methods for residual caffeine quantification in decaffeinated coffee, in terms of validation parameters, costs, analysis time, composition and treatment of the residues generated, and caffeine quantification in 20 commercial samples. Both methods showed suitable validation parameters, and caffeine content did not differ statistically between the two methods of analysis. The main advantage of the HPLC method was its 42-fold lower detection limit. Nevertheless, the CE detection limit was still 115-fold lower than the limit allowed by Brazilian law. The CE analyses were 30% faster, the reagent costs were 76.5-fold lower, and the volume of residues generated was 33-fold lower. Therefore, the CE method proved to be a valuable analytical tool for this type of analysis.
Abstract:
Seed dormancy is a frequent phenomenon in tropical species, causing slow and non-uniform germination. Treatments such as scarification on an abrasive surface and hot water are efficient in overcoming it. The objective of this study was to quantify seed germination with no treatment (Experiment 1) and to identify an efficient method of breaking dormancy in Schizolobium amazonicum Huber ex Ducke seeds (Experiment 2). The effects of manual scarification on electric emery, water at 80°C and 100°C, and manual scarification on wood sandpaper were studied. Seeds were sown in a sand and sawdust mixture either immediately after scarification or after immersion in water for 24 h. Germination percentage, hard-seed percentage and germination speed were recorded and analyzed in a completely randomized design. Germination was analyzed at 6, 9, 12, 15, 18, 21 and 24 days after sowing as a 4x2 factorial design and through regression analysis. Treatment means of the remaining variables were compared by the Tukey test. Seed germination with no treatment started on the 7th day after sowing and reached 90% on the 2310th day (Experiment 1). A significant interaction between dormancy-breaking treatments and time of immersion in water was observed (Experiment 2). In general, immersion in water increased germination in most evaluations. The regression analyses were significant for all treatments except the control and immersion in water at 80°C. Germination speed was higher when seeds were scarified on an abrasive surface (emery and sandpaper), and in these treatments germination ranged from 87% to 96%, with no hard seeds. S. amazonicum seed coats are impermeable to water, which hinders quick and uniform germination. Scarification on electric emery followed by immediate sowing, and scarification on sandpaper followed by immediate sowing or by sowing after 24 h, were the most efficient treatments for overcoming dormancy in S. amazonicum seeds.
Abstract:
Adjustment is an ongoing process by which factors of production are reallocated to equalize their returns in different uses. Adjustment occurs through market mechanisms or the intrafirm reallocation of resources as a result of changes in terms of trade, government policies, resource availability, technological change, etc. These changes alter production opportunities and production, transaction and information costs, and consequently modify production functions, organizational design, etc. In this paper we define adjustment (section 2); review empirical estimates of the extent of adjustment in Canada and abroad (section 3); review selected features of the trade policy and adjustment context relevant for policy formulation, among which: slow growth, a shift to services, a shift to the Pacific Rim, the internationalization of production, investment, distribution and communications, the growing use of NTBs, changes in foreign direct investment patterns, intrafirm and intraindustry trade, interregional trade flows, and differences in microeconomic adjustment processes between subsidiaries and Canadian companies (section 4); examine methodologies and results of studies of the impact of trade liberalization on jobs (section 5); and review the R. Harris general equilibrium model (section 6). Our conclusion emphasizes the importance of harmonizing commercial and domestic policies dealing with adjustment (section 7). We close with a bibliography of relevant publications.
Abstract:
Medical practice requires fast, simple and noninvasive diagnostic techniques. Several such methods have become possible because of the growth of technology that provides the means of collecting and processing signals. The present thesis details work done in the field of voice signals. New methods of analysis, such as nonlinear dynamics, have been developed to understand the complexity of voice signals and to explore their dynamic nature. The purpose of this thesis is to characterize the complexity of pathological voice signals relative to healthy signals and to differentiate stuttering signals from healthy signals. The efficiency of various acoustic as well as nonlinear time series methods is analysed. Three groups of samples are used: healthy individuals, subjects with vocal pathologies, and stuttering subjects. Individual vowels and continuous speech data for the utterance of the Malayalam sentence "iruvarum changatimaranu" (in English, "Both are good friends") are recorded using a microphone. The recorded audio is converted to digital signals and subjected to analysis. Acoustic perturbation measures such as fundamental frequency (F0), jitter, shimmer and zero crossing rate (ZCR) are computed, and nonlinear measures such as the maximum Lyapunov exponent (λmax), correlation dimension (D2), Kolmogorov entropy (K2), and a newer entropy measure, permutation entropy (PE), are evaluated for all three groups of subjects. Permutation entropy is a nonlinear complexity measure which can efficiently distinguish the regular and complex nature of a signal and extract information about a change in the dynamics of the process by indicating a sudden change in its value. The results show that nonlinear dynamical methods are a suitable technique for voice signal analysis, owing to the chaotic component of the human voice. Permutation entropy is well suited due to its sensitivity to uncertainties, since the pathologies are characterized by an increase in signal complexity and unpredictability. Pathological groups have higher entropy values than the normal group, while stuttering signals have lower entropy values than normal signals. PE is effective in characterising the level of improvement after two weeks of speech therapy in stuttering subjects, and in characterizing the dynamical difference between healthy and pathological subjects. This suggests that PE can improve and complement the voice analysis methods currently available to clinicians. The work establishes the application of the simple, inexpensive and fast PE algorithm for diagnosis in vocal disorders and stuttering subjects.
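For readers unfamiliar with the measure, here is a minimal sketch of normalised permutation entropy in the Bandt-Pompe ordinal-pattern sense; the embedding dimension m and delay tau below are typical defaults, not necessarily the settings used in the thesis.

```python
import math
from collections import Counter

import numpy as np

def permutation_entropy(x, m=3, tau=1):
    """Normalised permutation entropy in [0, 1], from ordinal-pattern frequencies."""
    x = np.asarray(x)
    n = len(x) - (m - 1) * tau                   # number of embedding windows
    counts = Counter(
        tuple(np.argsort(x[i:i + m * tau:tau]))  # ordinal pattern of each window
        for i in range(n)
    )
    p = np.array(list(counts.values()), dtype=float) / n
    return float(-(p * np.log(p)).sum() / math.log(math.factorial(m)))

t = np.linspace(0.0, 10.0 * np.pi, 1000)
print(permutation_entropy(np.sin(t)))            # regular signal: low PE
print(permutation_entropy(np.random.default_rng(0).normal(size=1000)))  # noise: PE near 1
```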
Abstract:
The study of variable stars is an important topic of modern astrophysics. Since the invention of powerful telescopes and high-resolution CCDs, variable star data have been accumulating on the order of petabytes. This huge amount of data requires many automated methods as well as human experts. This thesis is devoted to data analysis of variable stars' astronomical time series and hence belongs to the interdisciplinary field of Astrostatistics. For an observer on earth, stars whose apparent brightness changes over time are called variable stars. The variation in brightness may be regular (periodic), quasi-periodic (semi-periodic) or irregular (aperiodic), and is caused by various mechanisms. In some cases the variation is due to internal thermonuclear processes; these stars are generally known as intrinsic variables. In other cases it is due to external processes, such as eclipses or rotation; these are known as extrinsic variables. Intrinsic variables can be further grouped into pulsating variables, eruptive variables and flare stars. Extrinsic variables are grouped into eclipsing binary stars and chromospheric stars. Pulsating variables can again be classified into Cepheid, RR Lyrae, RV Tauri, Delta Scuti, Mira, etc. The eruptive or cataclysmic variables are novae, supernovae, etc., which occur rarely and are not periodic phenomena; most other variations are periodic in nature. Variable stars can be observed in many ways, such as photometry, spectrophotometry and spectroscopy. A sequence of photometric observations of a variable star produces time series data containing time, magnitude and error. The plot of a variable star's apparent magnitude against time is known as a light curve. If the time series data are folded on a period, the plot of apparent magnitude against phase is known as a phased light curve. The unique shape of the phased light curve is characteristic of each type of variable star, and one way to identify and classify a variable star is visual inspection of its phased light curve by an expert. For the last several years, automated algorithms have been used to classify groups of variable stars with the help of computers. Research on variable stars can be divided into stages such as observation, data reduction, data analysis, modeling and classification. Modeling of variable stars helps to determine their short-term and long-term behaviour, to construct theoretical models (e.g., the Wilson-Devinney model for eclipsing binaries) and to derive stellar properties such as mass, radius, luminosity, temperature, internal and external structure, chemical composition and evolution. Classification requires the determination of basic parameters such as period, amplitude and phase, as well as other derived parameters. Of these, period is the most important, since wrong periods lead to sparse light curves and misleading information. Time series analysis is a method of applying mathematical and statistical tests to data to quantify the variation, understand the nature of time-varying phenomena, gain physical understanding of the system and predict its future behavior. Astronomical time series usually suffer from unevenly spaced time instants, varying error conditions and the possibility of big gaps, owing to daily varying daylight and weather conditions for ground-based observations; observations from space may suffer from the impact of cosmic-ray particles.
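As a concrete illustration of the phase folding described above, here is a minimal Python sketch on synthetic data; array names and values are illustrative, not taken from the thesis.

```python
import numpy as np

def fold(t, mag, period):
    """Fold observation times on a trial period; return phase-sorted (phase, mag)."""
    phase = np.mod(t, period) / period           # phase in [0, 1)
    order = np.argsort(phase)
    return phase[order], mag[order]

# Synthetic, unevenly sampled light curve (times in days, magnitudes):
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0.0, 100.0, 300))        # uneven time instants
true_period = 2.5
mag = 12.0 + 0.3 * np.sin(2.0 * np.pi * t / true_period) + rng.normal(0.0, 0.02, t.size)

phase, folded_mag = fold(t, mag, true_period)    # a correct period yields a smooth phased
                                                 # light curve; a wrong one scatters the points
```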
Many large-scale astronomical surveys, such as MACHO, OGLE, EROS, ROTSE, PLANET, Hipparcos, MISAO, NSVS, ASAS, Pan-STARRS, Kepler, ESA Gaia, LSST and CRTS, provide variable star time series data, even though their primary intention is not variable star observation. The Center for Astrostatistics at Pennsylvania State University was established to help the astronomical community with statistical tools for harvesting and analysing archival data, and most of these surveys release their data to the public for further analysis. There exist many period-search algorithms for astronomical time series analysis, which can be classified into parametric methods (assuming some underlying distribution for the data) and non-parametric methods (assuming no statistical model such as a Gaussian). Many of the parametric methods are based on variations of the discrete Fourier transform, such as the Generalised Lomb-Scargle periodogram (GLSP) by Zechmeister (2009) and Significant Spectrum (SigSpec) by Reegen (2007); non-parametric methods include Phase Dispersion Minimisation (PDM) by Stellingwerf (1978) and the cubic spline method by Akerlof (1994). Even though most of these methods can be automated, none of them fully recovers the true periods. Wrong period detection can have several causes: power leakage to other frequencies due to the finite total interval, finite sampling interval and finite amount of data; aliasing due to regular sampling; spurious periods due to long gaps; and power flow to harmonic frequencies, an inherent problem of Fourier methods. Hence obtaining the exact period of a variable star from its time series data is still a difficult problem for huge databases subjected to automation. As Matthew Templeton, AAVSO, states, "Variable star data analysis is not always straightforward; large-scale, automated analysis design is non-trivial". Derekas et al. (2007) and Deb et al. (2010) state that "The processing of huge amount of data in these databases is quite challenging, even when looking at seemingly small issues such as period determination and classification". It will benefit the variable star astronomical community if basic parameters such as period, amplitude and phase are obtained more accurately when huge time series databases are subjected to automation. In the present thesis, the theories of four popular period-search methods are studied, their strengths and weaknesses are evaluated by applying them to two survey databases, and finally a modified form of the cubic spline method is introduced to confirm the exact period of a variable star. For the classification of newly discovered variable stars and their entry in the "General Catalogue of Variable Stars" or other databases such as the "Variable Star Index", the characteristics of the variability have to be quantified in terms of variable star parameters.
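To make the parametric route concrete, here is a brief sketch of period search with a Lomb-Scargle periodogram using astropy, applied to synthetic unevenly sampled data; this illustrates the GLSP idea cited above, not the thesis's own pipeline or its modified cubic spline method.

```python
import numpy as np
from astropy.timeseries import LombScargle

# Synthetic, unevenly sampled light curve (days, magnitudes):
rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0.0, 100.0, 300))
true_period = 2.5
mag = 12.0 + 0.3 * np.sin(2.0 * np.pi * t / true_period) + rng.normal(0.0, 0.02, t.size)

# Periodogram over an automatically chosen frequency grid:
frequency, power = LombScargle(t, mag).autopower()
best_period = 1.0 / frequency[np.argmax(power)]
print(f"best candidate period: {best_period:.3f} d")  # inspect aliases and harmonics before trusting
```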
Abstract:
Risk and uncertainty are, to say the least, poorly considered by most individuals involved in real estate analysis, in both development and investment appraisal. Surveyors continue to express 'uncertainty' about the value (risk) of using relatively objective methods of analysis to account for these factors. These methods attempt to identify the risk elements more explicitly, conventionally by deriving probability distributions for the uncontrolled variables in the system. A suggested 'new' way of "being able to express our uncertainty or slight vagueness about some of the qualitative judgements and not entirely certain data required in the course of the problem..." uses the application of fuzzy logic. This paper discusses and demonstrates the terminology and methodology of fuzzy analysis. In particular, it compares the procedures with those used in 'conventional' risk analysis approaches and critically investigates whether a fuzzy approach offers an alternative to probability-based analysis for dealing with aspects of risk and uncertainty in real estate analysis.
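As a minimal illustration of the fuzzy representation the paper demonstrates, a triangular membership function can encode a vague judgement such as "rental growth of about 3%" as degrees of membership rather than probabilities; the breakpoints below are hypothetical, not taken from the paper.

```python
def triangular(x, low, peak, high):
    """Degree of membership in [0, 1] of x in a triangular fuzzy number."""
    if x <= low or x >= high:
        return 0.0
    if x <= peak:
        return (x - low) / (peak - low)
    return (high - x) / (high - peak)

# "Growth of about 3%": full membership at 3, fading to 0 at 2 and 4.5.
for growth in (2.0, 2.5, 3.0, 4.0):
    print(growth, triangular(growth, 2.0, 3.0, 4.5))
```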
Abstract:
Alzheimer's disease (AD) is the most common type of dementia among the elderly, with devastating consequences for patients, their relatives, and caregivers. More than 300 genetic polymorphisms have been implicated in AD, demonstrating that this condition is polygenic, with a complex pattern of inheritance. This paper reports and compares the results of AD genetic studies in case-control and familial analyses performed in Brazil since our first publication 10 years ago. They include the following genes/markers: apolipoprotein E (APOE), 5-hydroxytryptamine transporter length polymorphic region (5-HTTLPR), brain-derived neurotrophic factor (BDNF), monoamine oxidase A (MAO-A), and two simple-sequence tandem repeat polymorphisms (DXS1047 and D10S1423). Previously unpublished data on the interleukin-1 alpha (IL-1 alpha) and interleukin-1 beta (IL-1 beta) genes are reported here briefly. Results from other Brazilian studies with AD patients are also reported in this short review, and four local families studied with various markers on chromosomes 21, 19, 14, and 1 are briefly reported for the first time. The importance of studying DNA samples from Brazil is highlighted because of the uniqueness of its population, which presents both intense ethnic admixture, mainly on the east coast, and clusters with high inbreeding rates in rural areas of the countryside. We discuss the current stage of extending these studies using high-throughput methods of large-scale genotyping, such as single nucleotide polymorphism microarrays, associated with bioinformatics tools that allow the analysis of such an extensive number of genetic variables with different levels of penetrance. There is still a long way between the huge amount of data gathered so far and its actual application toward the full understanding of AD, but the final goal is to develop precise tools for diagnosis and prognosis, creating new strategies for better treatments based on genetic profile.
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
Abstract:
Fluconazole, alpha-(2,4-difluorophenyl)-alpha-(1H-1,2,4-triazol-1-ylmethyl)-1H-1,2,4-triazole-1-ethanol, is an antifungal of the triazole class. It shows activity against Candida species and is indicated in cases of oropharyngeal, esophageal, and vaginal candidiasis, and in deep infections. Fluconazole is a selective inhibitor of the synthesis of ergosterol, a steroid exclusive to the fungal cell membrane. It is well absorbed from the gastrointestinal tract and spreads easily through body fluids. The main adverse reactions related to the use of fluconazole are nausea, vomiting, headache, rash, abdominal pain, diarrhea, and alopecia in patients undergoing prolonged treatment with a dose of 400 mg/day. As raw material, in pharmaceutical formulations, or in biological material, fluconazole can be determined by methods such as titration, spectrophotometry, and thin-layer, gas, and liquid chromatography. This article discusses the pharmacological and physicochemical properties of fluconazole and the methods of analysis applied to the determination of the drug.
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)