887 results for methods of analysis
Abstract:
This document contains analytical methods that detail the procedures for determining major and trace element concentrations in bivalve tissue and sediment samples collected as part of the National Status and Trends Program (NS&T) for the years 2000-2006. Previously published NOAA Technical Memoranda NOS ORCA 71 and 130 (Lauenstein and Cantillo, 1993; Lauenstein and Cantillo, 1998) detail trace element analyses for the years 1984-1992 and 1993-1996, respectively, and include ancillary, histopathology, and contaminant (organic and trace element) analytical methods. The methods presented in this document for trace element analysis were utilized by the NS&T Mussel Watch and Bioeffects Projects. The Mussel Watch Project has been monitoring contaminants in bivalves and sediment for over 20 years, and is the longest active contaminant monitoring program operating in U.S. coastal waters. Approximately 280 Mussel Watch sites are monitored on biennial and decadal timescales using bivalve tissue and sediment, respectively. The Bioeffects Project applies the sediment quality approach, which uses sediment contamination measurements, toxicity tests and benthic macroinfauna quantification to characterize pollution in selected estuaries and coastal embayments. Contaminant assessment is a core function of both projects. Although only one contract laboratory was used by the NS&T Program during the specified time period, several analytical methods and instruments were employed. The specific analytical method, including instrumentation and detection limit, is noted for each measurement taken and can be found at http://NSandT.noaa.gov. The major and trace elements measured by the NS&T Program include: Al, Si, Cr, Mn, Fe, Ni, Cu, Zn, As, Se, Sn, Sb, Ag, Cd, Hg, Tl and Pb.
Abstract:
Increases in food production and the ever-present threat of food contamination from microbiological and chemical sources have led the food industry and regulators to pursue rapid, inexpensive methods of analysis to safeguard the health and safety of the consumer. Although sophisticated techniques such as chromatography and spectrometry provide more accurate and conclusive results, screening tests allow a much higher throughput of samples at a lower cost and with less operator training, so larger numbers of samples can be analysed. Biosensors combine a biological recognition element (enzyme, antibody, receptor) with a transducer to produce a measurable signal proportional to the extent of interaction between the recognition element and the analyte. The different uses of the biosensing instrumentation available today are extremely varied, with food analysis as an emerging and growing application. The advantages offered by biosensors over other screening methods such as radioimmunoassay, enzyme-linked immunosorbent assay, fluorescence immunoassay and luminescence immunoassay, with respect to food analysis, include automation, improved reproducibility, speed of analysis and real-time analysis. This article provides a brief historical background before reviewing the latest developments in biosensor applications for analysis of food contaminants (January 2007 to December 2010), focusing on the detection of pathogens, toxins, pesticides and veterinary drug residues by biosensors, with emphasis on articles showing data in food matrices. The main areas of development common to these groups of contaminants include multiplexing (the ability to analyse a sample simultaneously for more than one contaminant) and portability. Biosensors currently have an important role in food safety; further advances in the technology, reagents and sample handling will surely reinforce this position.
Abstract:
The construction industry in Northern Ireland is one of the major contributors of construction waste to landfill each year. The aim of this research paper is to identify the core on-site management causes of material waste on construction sites in Northern Ireland and to illustrate various methods of prevention which can be adopted. The research begins with a detailed literature review and is complemented by semi-structured interviews with six professionals who are experienced and active within the Northern Ireland construction industry. Following on from the literature review and interview analysis, a questionnaire survey is developed to obtain further information in relation to the subject area. The questionnaire is based on the key findings of the previous stages to direct the research towards the most influential factors. The analysis of the survey responses reveals that the core causes of waste generation include a rushed programme, poor handling and on-site damage of materials, while the principal methods of prevention emerge as adequate storage, the reuse of materials on-site and efficient material ordering. Furthermore, the role of the professional background in the shaping of perceptions relevant to waste management is also investigated and significant differences are identified. The findings of this research are beneficial for the industry as they enhance the understanding of construction waste generation causes and highlight the practices required to reduce waste on-site in the context of sustainable development.
Abstract:
Master's dissertation, Water and Coastal Management, Faculdade de Ciências e Tecnologia, Universidade do Algarve, 2010
Abstract:
Master's dissertation, Quality in Analysis, Faculdade de Ciências e Tecnologia, Universidade do Algarve, 2015
Abstract:
Two of the most frequently used methods of pollen counting on slides from Hirst type traps are evaluated in this paper: the transverse traverse method and the longitudinal traverse method. The study was carried out during June–July 1996 and 1997 on slides from a trap at Worcester, UK. Three pollen types were selected for this purpose: Poaceae, Urticaceae and Quercus. The statistical results show that the daily concentrations followed similar trends (p < 0.01, R-values between 0.78–0.96) with both methods during the two years, although the counts were slightly higher using the longitudinal traverse method. Significant differences were observed, however, when the distribution of the concentrations during 24 hour sampling periods was considered. For more detailed analysis, the daily counts obtained with both methods were correlated with the total number of pollen grains for the taxon over the whole slide, in two different situations: high and low concentrations of pollen in the atmosphere. In the case of high concentrations, the counts for all three taxa with both methods are significantly correlated with the total pollen count. In the samples with low concentrations, the Poaceae and Urticaceae counts with both methods are significantly correlated with the total counts, but none of the Quercus counts are. Consideration of the results indicates that both methods give a reasonable approximation to the count derived from the slide as a whole. More studies need to be done to explore the comparability of counting methods in order to work towards a Universal Methodology in Aeropalynology.
Abstract:
Seismic data is difficult to analyze and classical mathematical tools reveal strong limitations in exposing hidden relationships between earthquakes. In this paper, we study earthquake phenomena from the perspective of complex systems. Global seismic data, covering the period from 1962 up to 2011, is analyzed. The events, characterized by their magnitude, geographic location and time of occurrence, are divided into groups, either according to the Flinn-Engdahl (F-E) seismic regions of Earth or using a rectangular grid based on latitude and longitude coordinates. Two methods of analysis are considered and compared in this study. In the first method, the distributions of magnitudes are approximated by Gutenberg-Richter (G-R) distributions and the parameters are used to reveal the relationships among regions. In the second method, the mutual information is calculated and adopted as a measure of similarity between regions. In both cases, using clustering analysis, visualization maps are generated, providing an intuitive and useful representation of the complex relationships that are present among seismic data. Such relationships might not be perceived on classical geographic maps. Therefore, the generated charts are a valid alternative to other visualization tools for understanding the global behavior of earthquakes.
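As a companion to the abstract above, the following is a minimal sketch, not the paper's code, of the two measures it mentions: a Gutenberg-Richter b-value estimate per region and a histogram-based mutual information between two regions' magnitude sequences. The synthetic catalogues, completeness magnitude and element-wise pairing of the two regions are illustrative assumptions; the paper's own pairing of regional event data may differ.

```python
# Illustrative sketch only: G-R b-value and mutual information as a similarity measure.
import numpy as np

def gr_b_value(magnitudes, mc):
    """Aki maximum-likelihood estimate of the G-R b-value above the completeness magnitude mc."""
    m = magnitudes[magnitudes >= mc]
    return np.log10(np.e) / (m.mean() - mc)

def mutual_information(x, y, bins=20):
    """Histogram-based mutual information between two paired samples (in nats)."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()                      # joint probabilities
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)  # marginals
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / np.outer(px, py)[nz])))

# Synthetic catalogues: magnitudes above Mc = 2.5 drawn from exponential tails
# (G-R implies rate b*ln(10) above Mc); true b-values of ~1.0 and ~1.2 are assumed.
rng = np.random.default_rng(0)
region_a = 2.5 + rng.exponential(scale=1 / (np.log(10) * 1.0), size=5000)
region_b = 2.5 + rng.exponential(scale=1 / (np.log(10) * 1.2), size=5000)
print("b(A) =", gr_b_value(region_a, 2.5), " b(B) =", gr_b_value(region_b, 2.5))
print("MI(A, B) =", mutual_information(region_a[:4000], region_b[:4000]))
```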
Abstract:
Adjustment is an ongoing process by which factors of production are reallocated to equalize their returns in different uses. Adjustment occurs through market mechanisms or intrafirm reallocation of resources as a result of changes in terms of trade, government policies, resource availability, technological change, etc. These changes alter production opportunities and production, transaction and information costs, and consequently modify production functions, organizational design, etc. In this paper we define adjustment (section 2); review empirical estimates of the extent of adjustment in Canada and abroad (section 3); review selected features of the trade policy and adjustment context of relevance for policy formulation, among which: slow growth, a shift to services, a shift to the Pacific Rim, the internationalization of production, investment, distribution and communications, the growing use of NTBs, changes in foreign direct investment patterns, intrafirm and intraindustry trade, interregional trade flows, and differences in microeconomic adjustment processes between subsidiaries and Canadian companies (section 4); examine methodologies and results of studies of the impact of trade liberalization on jobs (section 5); and review the R. Harris general equilibrium model (section 6). Our conclusion emphasizes the importance of harmonizing commercial and domestic policies dealing with adjustment (section 7). We close with a bibliography of relevant publications.
Abstract:
Medical fields require fast, simple and noninvasive diagnostic techniques. Several such methods are available and possible because of the growth of technology that provides the necessary means of collecting and processing signals. The present thesis details work done in the field of voice signals. New methods of analysis, such as nonlinear dynamics, have been developed to understand the complexity of voice signals and to explore their dynamic nature. The purpose of this thesis is to characterize the complexities that distinguish pathological voice signals from healthy signals and to differentiate stuttering signals from healthy signals. The efficiency of various acoustic as well as nonlinear time series methods is analysed. Three groups of samples are used: one from healthy individuals, one from subjects with vocal pathologies and one from stuttering subjects. Individual vowels and continuous speech data for the utterance of the Malayalam sentence "iruvarum changatimaranu" (in English, "Both are good friends") are recorded using a microphone. The recorded audio is converted to digital signals and subjected to analysis. Acoustic perturbation measures such as fundamental frequency (F0), jitter, shimmer and zero crossing rate (ZCR) are computed, and nonlinear measures such as the maximum Lyapunov exponent (Lambda max), correlation dimension (D2), Kolmogorov exponent (K2), and a new measure of entropy, viz. permutation entropy (PE), are evaluated for all three groups of subjects. Permutation entropy is a nonlinear complexity measure which can efficiently distinguish the regular and complex nature of any signal and extract information about the change in dynamics of the process by indicating sudden changes in its value. The results show that nonlinear dynamical methods are a suitable technique for voice signal analysis, owing to the chaotic component of the human voice. Permutation entropy is well suited due to its sensitivity to uncertainties, since the pathologies are characterized by an increase in signal complexity and unpredictability. Pathological groups have higher entropy values compared to the normal group, while the stuttering signals have lower entropy values compared to the normal signals. PE is effective in characterizing the level of improvement after two weeks of speech therapy in the case of stuttering subjects. PE is also effective in characterizing the dynamical difference between healthy and pathological subjects. This suggests that PE can improve and complement the voice analysis methods currently available to clinicians. The work establishes the application of the simple, inexpensive and fast PE algorithm for diagnosis in vocal disorder and stuttering subjects.
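For the abstract above, a minimal sketch of the Bandt-Pompe permutation entropy it relies on is given below. This is not the thesis code: the embedding order, delay, sampling and test signals are assumptions chosen only to show why a more irregular signal yields a higher PE value.

```python
# Illustrative sketch only: normalized permutation entropy of a 1-D signal.
import numpy as np
from math import factorial

def permutation_entropy(signal, order=3, delay=1, normalize=True):
    """Bandt-Pompe permutation entropy; returns a value in [0, 1] when normalized."""
    x = np.asarray(signal, dtype=float)
    n = len(x) - (order - 1) * delay
    # Ordinal pattern (ranking) of each delay-embedded vector of length `order`.
    patterns = np.array([np.argsort(x[i:i + (order - 1) * delay + 1:delay]) for i in range(n)])
    _, counts = np.unique(patterns, axis=0, return_counts=True)
    p = counts / counts.sum()
    pe = -np.sum(p * np.log2(p))
    return pe / np.log2(factorial(order)) if normalize else pe

# A regular (sinusoidal) signal gives low PE; adding noise increases complexity and PE.
t = np.linspace(0, 1, 2000)
clean = np.sin(2 * np.pi * 120 * t)
noisy = clean + 0.5 * np.random.default_rng(1).normal(size=t.size)
print("PE clean:", permutation_entropy(clean), " PE noisy:", permutation_entropy(noisy))
```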
Abstract:
The study of variable stars is an important topic in modern astrophysics. After the invention of powerful telescopes and high-resolution CCDs, variable star data have been accumulating on the order of petabytes. This huge amount of data requires automated methods as well as human experts. This thesis is devoted to the analysis of variable star astronomical time series data and hence belongs to the interdisciplinary field of Astrostatistics. For an observer on Earth, stars that show a change in apparent brightness over time are called variable stars. The variation in brightness may be regular (periodic), quasi-periodic (semi-periodic) or irregular (aperiodic), and is caused by various mechanisms. In some cases the variation is due to internal thermonuclear processes; such stars are generally known as intrinsic variables. In other cases it is due to external processes, like eclipses or rotation; these are known as extrinsic variables. Intrinsic variables can be further grouped into pulsating variables, eruptive variables and flare stars. Extrinsic variables are grouped into eclipsing binary stars and chromospheric stars. Pulsating variables can again be classified into Cepheid, RR Lyrae, RV Tauri, Delta Scuti, Mira, etc. The eruptive or cataclysmic variables are novae, supernovae, etc., which occur rarely and are not periodic phenomena. Most of the other variations are periodic in nature. Variable stars can be observed in many ways, such as photometry, spectrophotometry and spectroscopy. The sequence of photometric observations of a variable star produces time series data containing time, magnitude and error. The plot of a variable star's apparent magnitude against time is known as a light curve. If the time series data is folded on a period, the plot of apparent magnitude against phase is known as a phased light curve. The unique shape of the phased light curve is characteristic of each type of variable star. One way to identify and classify the type of variable star is for an expert to inspect the phased light curve visually. For the last several years, automated algorithms have been used to classify groups of variable stars with the help of computers. Research on variable stars can be divided into different stages like observation, data reduction, data analysis, modeling and classification. Modeling of variable stars helps to determine their short-term and long-term behaviour, to construct theoretical models (e.g., the Wilson-Devinney model for eclipsing binaries) and to derive stellar properties like mass, radius, luminosity, temperature, internal and external structure, chemical composition and evolution. Classification requires the determination of basic parameters like period, amplitude and phase, and also some other derived parameters. Of these, period is the most important parameter, since a wrong period can lead to sparse light curves and misleading information. Time series analysis is a method of applying mathematical and statistical tests to data to quantify the variation, understand the nature of time-varying phenomena, gain physical understanding of the system and predict its future behavior. Astronomical time series usually suffer from unevenly spaced time instants, varying error conditions and the possibility of large gaps. For ground-based observations this is due to daily varying daylight and weather conditions, while observations from space may suffer from the impact of cosmic ray particles.
Many large-scale astronomical surveys such as MACHO, OGLE, EROS, ROTSE, PLANET, Hipparcos, MISAO, NSVS, ASAS, Pan-STARRS, Kepler, ESA, Gaia, LSST and CRTS provide variable star time series data, even though their primary intention is not variable star observation. The Center for Astrostatistics, Pennsylvania State University, was established to help the astronomical community with statistical tools for harvesting and analysing archival data. Most of these surveys release their data to the public for further analysis. There exist many period search algorithms for astronomical time series analysis, which can be classified into parametric methods (which assume some underlying distribution for the data) and non-parametric methods (which do not assume any statistical model, such as a Gaussian). Many of the parametric methods are based on variations of discrete Fourier transforms, like the Generalised Lomb-Scargle periodogram (GLSP) by Zechmeister (2009) and the Significant Spectrum (SigSpec) method by Reegen (2007). Non-parametric methods include Phase Dispersion Minimisation (PDM) by Stellingwerf (1978) and the cubic spline method by Akerlof (1994). Even though most of these methods can be automated, none of the methods stated above can fully recover the true periods. Wrong period detection can arise for several reasons, such as power leakage to other frequencies due to the finite total interval, finite sampling interval and finite amount of data. Another problem is aliasing, which is due to the influence of regular sampling. Spurious periods also appear due to long gaps, and power flow to harmonic frequencies is an inherent problem of Fourier methods. Hence obtaining the exact period of a variable star from its time series data is still a difficult problem for huge databases subjected to automation. As Matthew Templeton, AAVSO, states, "Variable star data analysis is not always straightforward; large-scale, automated analysis design is non-trivial". Derekas et al. (2007) and Deb et al. (2010) state, "The processing of huge amounts of data in these databases is quite challenging, even when looking at seemingly small issues such as period determination and classification". It would be beneficial for the variable star astronomical community if basic parameters such as period, amplitude and phase were obtained more accurately when huge time series databases are subjected to automation. In the present thesis work, the theories of four popular period search methods are studied, the strengths and weaknesses of these methods are evaluated by applying them to two survey databases, and finally a modified form of the cubic spline method is introduced to confirm the exact period of a variable star. For the classification of newly discovered variable stars and their entry in the "General Catalogue of Variable Stars" or other databases like the "Variable Star Index", the characteristics of the variability have to be quantified in terms of variable star parameters.
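The sketch below illustrates one of the non-parametric methods cited in this abstract, Stellingwerf-style Phase Dispersion Minimisation, on unevenly sampled synthetic photometry. It is an assumed, simplified implementation, not the thesis code or its modified cubic spline method; the bin count, trial-period grid and light curve are illustrative.

```python
# Illustrative sketch only: PDM period search for an unevenly sampled light curve.
import numpy as np

def pdm_statistic(time, mag, period, n_bins=10):
    """Theta statistic: pooled within-phase-bin variance over total variance (lower is better)."""
    phase = (time / period) % 1.0
    bins = np.floor(phase * n_bins).astype(int)
    total_var = mag.var(ddof=1)
    num, den = 0.0, 0
    for b in range(n_bins):
        m = mag[bins == b]
        if m.size > 1:
            num += (m.size - 1) * m.var(ddof=1)
            den += m.size - 1
    return (num / den) / total_var

def best_period(time, mag, trial_periods):
    """Return the trial period that minimises the PDM theta statistic."""
    theta = [pdm_statistic(time, mag, p) for p in trial_periods]
    return trial_periods[int(np.argmin(theta))]

# Unevenly sampled sinusoidal variable with an assumed true period of 0.73 d.
rng = np.random.default_rng(2)
t = np.sort(rng.uniform(0, 30, 400))
m = 12.0 + 0.3 * np.sin(2 * np.pi * t / 0.73) + 0.02 * rng.normal(size=t.size)
grid = np.linspace(0.5, 1.0, 5001)
print("Recovered period (d):", best_period(t, m, grid))
```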
Abstract:
Risk and uncertainty are, to say the least, poorly considered by most individuals involved in real estate analysis - in both development and investment appraisal. Surveyors continue to express 'uncertainty' about the value (risk) of using relatively objective methods of analysis to account for these factors. These methods attempt to identify the risk elements more explicitly. Conventionally this is done by deriving probability distributions for the uncontrolled variables in the system. A suggested 'new' way of "being able to express our uncertainty or slight vagueness about some of the qualitative judgements and not entirely certain data required in the course of the problem..." uses the application of fuzzy logic. This paper discusses and demonstrates the terminology and methodology of fuzzy analysis. In particular it attempts a comparison of the procedures with those used in 'conventional' risk analysis approaches and critically investigates whether a fuzzy approach offers an alternative to the use of probability-based analysis for dealing with aspects of risk and uncertainty in real estate analysis.
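The sketch below is a minimal, generic illustration of the kind of fuzzy representation the paper contrasts with probability distributions: a triangular fuzzy number for an uncertain appraisal input, with membership grades and an alpha-cut interval. The numbers and variable names are invented for illustration and do not come from the paper.

```python
# Illustrative sketch only: a triangular fuzzy number versus a point or probabilistic estimate.
import numpy as np

def triangular_membership(x, low, mode, high):
    """Membership grade of x in the triangular fuzzy number (low, mode, high)."""
    x = np.asarray(x, dtype=float)
    rising = np.clip((x - low) / (mode - low), 0.0, 1.0)
    falling = np.clip((high - x) / (high - mode), 0.0, 1.0)
    return np.minimum(rising, falling)

def alpha_cut(alpha, low, mode, high):
    """Interval of values whose membership is at least alpha."""
    return (low + alpha * (mode - low), high - alpha * (high - mode))

# Hypothetical judgement: 'rent is around 250 per unit area, certainly between 220 and 300'.
print(triangular_membership([230, 250, 280], 220, 250, 300))  # graded possibility of each value
print(alpha_cut(0.8, 220, 250, 300))                          # values at least 0.8 possible
```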
Abstract:
Alzheimer's Disease (AD) is the most common type of dementia among the elderly, with devastating consequences for the patient, their relatives, and caregivers. More than 300 genetic polymorphisms have been implicated in AD, demonstrating that this condition is polygenic and has a complex pattern of inheritance. This paper aims to report and compare the results of AD genetics studies in case-control and familial analyses performed in Brazil since our first publication, 10 years ago. They include the following genes/markers: Apolipoprotein E (APOE), 5-hydroxytryptamine transporter length polymorphic region (5-HTTLPR), brain-derived neurotrophic factor (BDNF), monoamine oxidase A (MAO-A), and two simple-sequence tandem repeat polymorphisms (DXS1047 and D10S1423). Previously unpublished data on the interleukin-1 alpha (IL-1 alpha) and interleukin-1 beta (IL-1 beta) genes are reported here briefly. Results from other Brazilian studies with AD patients are also reported in this short review. Four local families studied with various markers on chromosomes 21, 19, 14, and 1 are briefly reported for the first time. The importance of studying DNA samples from Brazil is highlighted because of the uniqueness of its population, which presents both intense ethnic miscegenation, mainly on the east coast, and clusters with high inbreeding rates in rural areas of the countryside. We discuss the current stage of extending these studies using high-throughput methods of large-scale genotyping, such as single nucleotide polymorphism microarrays, associated with bioinformatics tools that allow the analysis of such an extensive number of genetic variables with different levels of penetrance. There is still a long way between the huge amount of data gathered so far and its actual application toward the full understanding of AD, but the final goal is to develop precise tools for diagnosis and prognosis, creating new strategies for better treatments based on genetic profile.
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
Abstract:
Fluconazole, alpha-(2,4-difluorophenyl)-alpha-(1H-1,2,4-triazol-1-ylmethyl)-1H-1,2,4-triazole-1-ethanol, is an antifungal of the triazole class. It shows activity against Candida species and is indicated in cases of oropharyngeal, esophageal, vaginal, and deep candidiasis. Fluconazole is a selective inhibitor of the synthesis of ergosterol, a sterol exclusive to the cell membrane of fungal cells. Fluconazole is highly absorbed by the gastrointestinal tract and spreads easily through body fluids. The main adverse reactions related to the use of fluconazole are nausea, vomiting, headache, rash, abdominal pain, diarrhea, and alopecia in patients undergoing prolonged treatment with a dose of 400 mg/day. In the form of raw material, pharmaceutical formulations, or biological material, fluconazole can be determined by methods such as titration, spectrophotometry, and thin-layer, gas, and liquid chromatography. This article discusses the pharmacological and physicochemical properties of fluconazole and also the methods of analysis applied to the determination of the drug.