Abstract:
Low-quality mine drainage from tailings facilities persists as one of the most significant global environmental concerns related to sulphide mining. Due to the large variation in geological and environmental conditions at mine sites, universal approaches to the management of mine drainage are not always applicable. Instead, site-specific knowledge of the geochemical behaviour of waste materials is required for the design and closure of the facilities. In this thesis, tailings-derived water contamination and the factors causing the pollution were investigated at two coeval active sulphide mine sites in Finland: the Hitura Ni mine and the Luikonlahti Cu-Zn-Co-Ni mine and talc processing plant. A hydrogeochemical study was performed to characterise the tailings-derived water pollution at Hitura. Geochemical changes in the Hitura tailings were evaluated with a detailed mineralogical and geochemical investigation (solid-phase speciation, acid mine drainage potential, pore water chemistry) and a spatial assessment to identify the mechanisms of water contamination. A similar spatial investigation, applying selective extractions, was carried out in the Luikonlahti tailings area for comparative purposes (Hitura low-sulphide tailings vs. Luikonlahti sulphide-rich tailings). At both sites, the hydrogeochemistry of tailings seepage waters was further characterised to examine the net results of the processes observed within the impoundments and to identify constraints for water treatment. At Luikonlahti, annual and seasonal variation in effluent quality was evaluated based on a four-year monitoring period. Observations pertinent to future assessment and to the prevention of mine drainage from existing and future tailings facilities were presented based on the results. A combination of hydrogeochemical approaches provided a means to delineate the tailings-derived neutral mine drainage at Hitura.
Tailings effluents with elevated Ni, SO₄²⁻ and Fe content had dispersed into the surrounding aquifer through a levelled-out esker and underneath the seepage collection ditches. In future mines, this could be avoided with additional basal liners in tailings impoundments where the permeability of the underlying Quaternary deposits is inadequate, and with sufficiently deep ditches. Based on the studies, extensive sulphide oxidation with subsequent metal release may already begin during active tailings disposal. The intensity and onset of oxidation depended on, for example, the Fe sulphide content of the tailings, the water saturation level, and the time of exposure of fresh sulphide grains. Continuous disposal decreased sulphide weathering at the surface of low-sulphide tailings, but oxidation began if they were left uncovered after disposal ceased. In the sulphide-rich tailings, delayed burial of the unsaturated tailings had resulted in thick oxidized layers despite the continuous operation. Sulphide weathering and contaminant release also occurred in the border zones. Based on the results, the prevention of sulphide oxidation should already be considered in the planning of tailings disposal, taking the border zones into account. Moreover, even low-sulphide tailings should be covered without delay after active disposal ceases. The quality of tailings effluents showed wide variation within a single impoundment and between the two different types of tailings facilities assessed. The affecting factors included the source materials, the intensity of weathering of tailings and embankment materials along the seepage flow path, inputs from process waters, the water retention time in the tailings, and climatic seasonality. In addition, modifications to the tailings impoundment may markedly change the effluent quality. The wide variation in tailings effluent quality poses challenges for treatment design.
The final decision on water management requires quantification of the spatial and seasonal fluctuation at the site, taking into account changes resulting from the eventual closure of the impoundment. Overall, comprehensive hydrogeochemical mapping was deemed essential for identifying critical contaminants and their sources at mine sites. Mineralogical analysis, selective extractions, and pore water analysis formed a good combination of methods for studying the weathering of tailings and for evaluating metal mobility from the facilities. Selective extractions combined with visual observations and pH measurements of tailings solids were, nevertheless, adequate for describing the spatial distribution of sulphide oxidation in tailings impoundments. Seepage water chemistry provided additional data on the geochemical processes in the tailings and was necessary for defining constraints for water treatment.
Abstract:
This thesis examines and explains the procedure used to redesign the attachment of permanent magnets to the rotor surface of a synchronous generator. The methodology followed to move from the existing assembly to the final proposed innovation was based on the systematic design approach. This meant that a series of steps first had to be predefined as a frame of reference, later used to compare and select proposals and finally to obtain the innovation that was sought. Firstly, a series of patents was used as background for the upcoming ideas. To this end, several different patented assemblies were found and categorized according to the main element on which this thesis focuses, namely the attachment element or method. After establishing the technological frame of reference, a brainstorming session was held to obtain as many ideas as possible. These ideas were then classified, regardless of their degree of complexity or usability, since at this stage the quantity of ideas was the important issue. Subsequently, they were compared and evaluated from different points of view. The comparison and evaluation in this case was based on a requirement list, which established the main needs that the design had to fulfill. The selection could then be made by grading each idea against these requirements. In this way, the idea or ideas that best fulfilled the requirements could be identified. Once all of the ideas had been compared and evaluated, the best or most suitable idea or ideas were set apart. Finally, the selected idea or ideas was/were analyzed in depth and a number of improvements were made. Consequently, a final idea was refined and made more suitable in terms of its performance, manufacture, and life cycle. Therefore, in the end, the design process gave a solution to the problem pointed out at the beginning.
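The grading-against-a-requirement-list step described above can be sketched as a weighted scoring matrix. Note that the requirements, weights, grades, and candidate attachment concepts below are invented placeholders for illustration only; they are not the actual criteria or proposals from the thesis.

```python
# Toy sketch of idea selection by grading against a requirement list.
# All requirements, weights, and grades are hypothetical examples.

requirements = {            # requirement -> weight (relative importance)
    "holding strength": 0.4,
    "ease of assembly": 0.3,
    "manufacturing cost": 0.2,
    "magnet protection": 0.1,
}

ideas = {                   # idea -> grade (0-10) per requirement
    "adhesive bonding":  {"holding strength": 5, "ease of assembly": 9,
                          "manufacturing cost": 8, "magnet protection": 3},
    "bolted pole shoes": {"holding strength": 9, "ease of assembly": 5,
                          "manufacturing cost": 5, "magnet protection": 8},
    "retaining sleeve":  {"holding strength": 8, "ease of assembly": 6,
                          "manufacturing cost": 6, "magnet protection": 9},
}

def score(grades):
    """Weighted sum of grades over all requirements."""
    return sum(requirements[r] * g for r, g in grades.items())

ranked = sorted(ideas, key=lambda name: score(ideas[name]), reverse=True)
for name in ranked:
    print(f"{name}: {score(ideas[name]):.1f}")
```

The highest-scoring idea(s) would then move on to the detailed analysis and improvement stage described in the abstract.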
Abstract:
The nutrient load to the Gulf of Finland has started to increase as a result of the strong economic recovery in agriculture and livestock farming in the Leningrad region. Sludge produced at the municipal wastewater treatment plants of the Leningrad region also has a great impact on the environment, yet the main options for its treatment are still disposal on sludge beds or in landfills. The aim of this study was to evaluate the implementation of possible joint treatment methods for manure from livestock and poultry enterprises and sewage sludge produced at municipal wastewater treatment plants in the Leningrad region. The study is based on published data. Most attention was paid to anaerobic digestion and incineration. The manure and sewage sludge generation for the whole Leningrad region and the energy potential of their treatment were estimated. The calculations showed that the total sewage sludge generation is 1 348 000 t/a and the manure generation is 3 445 000 t/a, both calculated on wet matter. The potential heat release from the anaerobic digestion and incineration processes is 4 880 000 GJ/a and 5 950 000 GJ/a, respectively. Furthermore, the work gives an overview of the general Russian and Finnish legislation concerning manure and sewage sludge treatment. In the Gatchina district, a WWTP and livestock and poultry enterprises were chosen to evaluate the implementation of a centralized treatment plant based on anaerobic digestion and incineration. The electric and heat power of a plant based on biogas combustion is 4.3 MW and 7.8 MW, respectively. The electric and heat power of a plant based on manure and sewage sludge incineration is 3.0 MW and 6.1 MW, respectively.
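As a sanity check on the scale of these figures, the annual heat-release totals can be converted to an equivalent average thermal power. The sketch below performs only that unit conversion; the GJ/a figures come from the abstract, and the result is the region-wide potential, which is naturally much larger than the MW ratings quoted for the single Gatchina district plant.

```python
# Convert annual heat release (GJ/a) to equivalent average thermal power (MW).
# The GJ/a inputs are the region-wide totals quoted in the abstract.

SECONDS_PER_YEAR = 365 * 24 * 3600   # 31 536 000 s

def gj_per_year_to_mw(gj_per_year: float) -> float:
    """Average power in MW equivalent to a given annual energy in GJ/a."""
    joules = gj_per_year * 1e9           # GJ -> J
    watts = joules / SECONDS_PER_YEAR    # J/s = W
    return watts / 1e6                   # W -> MW

digestion_mw = gj_per_year_to_mw(4_880_000)     # anaerobic digestion potential
incineration_mw = gj_per_year_to_mw(5_950_000)  # incineration potential
print(f"{digestion_mw:.0f} MW, {incineration_mw:.0f} MW")
```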
Abstract:
Credit risk assessment is an integral part of banking. Credit risk means that the return may not materialise if the customer fails to fulfil its obligations. A key component of banking is therefore setting acceptance criteria for granting loans. The theoretical part of the study focuses on the key components of banks' credit assessment methods described in the literature for extending credit to large corporations. The main component is the Basel II Accord, which sets regulatory requirements for banks' credit risk assessment methods. The empirical part comprises, as a primary source, an analysis of major Nordic banks' annual reports and risk management reports. As a secondary source, complementary interviews were carried out with senior credit risk assessment personnel. The findings indicate that all major Nordic banks use a combination of quantitative and qualitative information in their credit risk assessment models when extending credit to large corporations. The relative input of qualitative information depends on the selected approach to credit rating, i.e. point-in-time or through-the-cycle.
Abstract:
One approach to verifying the adequacy of methods for estimating reference evapotranspiration is comparison with the Penman-Monteith method, recommended by the Food and Agriculture Organization of the United Nations (FAO) as the standard method for estimating ET0. This study aimed to compare the Makkink (MK), Hargreaves (HG) and Solar Radiation (RS) methods for estimating ET0 with Penman-Monteith (PM). For this purpose, we used daily data of global solar radiation, air temperature, relative humidity and wind speed for the year 2010, obtained from the automatic meteorological station of the National Institute of Meteorology situated on the campus of the Federal University of Uberlandia - MG, Brazil (latitude 18.9166° S, longitude 48.2505° W, altitude 869 m). Results for the period were analyzed on a daily basis using regression analysis with the linear model y = ax, where the dependent variable was the Penman-Monteith estimate and the independent variable was the ET0 estimate of each evaluated method. A methodology was also applied to check the influence of the standard deviation of daily ET0 on the comparison of methods. The evaluation indicated that the Solar Radiation and Penman-Monteith methods cannot be compared, while the Hargreaves method showed the best fit for estimating ET0.
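The no-intercept regression y = ax used in the comparison can be sketched as follows. The daily ET0 series below are synthetic placeholders (the study's actual station data are not reproduced here); only the fitting procedure is illustrated, using the closed-form least-squares slope for a regression through the origin, a = Σxy / Σx².

```python
import numpy as np

# Sketch of the comparison: fit y = a*x with y = daily Penman-Monteith ET0
# and x = ET0 from the method under evaluation. Data are synthetic.

rng = np.random.default_rng(0)
et0_method = rng.uniform(2.0, 6.0, size=365)           # hypothetical daily ET0 (mm/day)
et0_pm = 0.95 * et0_method + rng.normal(0, 0.2, 365)   # hypothetical PM values

# Least-squares slope for a regression through the origin: a = sum(xy)/sum(x^2)
a = np.sum(et0_method * et0_pm) / np.sum(et0_method**2)

# Coefficient of determination relative to the fitted line
residuals = et0_pm - a * et0_method
r2 = 1 - np.sum(residuals**2) / np.sum((et0_pm - et0_pm.mean())**2)
print(f"slope a = {a:.3f}, R^2 = {r2:.3f}")
```

A slope close to 1 with high R² would indicate good agreement between the evaluated method and Penman-Monteith.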
Abstract:
The aim of this study was to compare two methods of tear sampling for protein quantification. Tear samples were collected from 29 healthy dogs (58 eyes) using Schirmer tear test (STT) strips and microcapillary tubes. The samples were frozen at -80ºC and analyzed by the Bradford method. Results were analyzed by Student's t test. The average protein concentration and standard deviation in tears collected with microcapillary tubes were 4.45 mg/mL ±0.35 and 4.52 mg/mL ±0.29 for the right and left eyes, respectively. The average protein concentration and standard deviation in tears collected with STT strips were 54.5 mg/mL ±0.63 and 54.15 mg/mL ±0.65 for the right and left eyes, respectively. Statistically significant differences (p<0.001) were found between the methods. Under the conditions of this study, the average protein concentration obtained with the Bradford test from tear samples collected with STT strips was higher than that obtained with microcapillary tubes. Reference values for tear protein concentration should therefore be interpreted according to the method used to collect the tear samples.
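The Student's t test used above can be sketched from the quoted summary statistics. This uses the right-eye means and standard deviations from the abstract with the equal-variance (pooled) form of the test and n = 29 per group; whether the original analysis treated eyes this way (and as independent samples) is an assumption here, not stated in the abstract.

```python
import math

# Pooled two-sample Student's t from summary statistics.
# Inputs: right-eye values quoted in the abstract; n = 29 per method assumed.

def t_statistic(mean1, sd1, n1, mean2, sd2, n2):
    """Student's t for two independent samples with pooled variance."""
    pooled_var = ((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2)
    se = math.sqrt(pooled_var * (1 / n1 + 1 / n2))
    return (mean1 - mean2) / se

# STT strip vs. microcapillary tube, right eyes
t = t_statistic(54.5, 0.63, 29, 4.45, 0.35, 29)
print(f"t = {t:.1f}")  # a very large t, consistent with p < 0.001
```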
Abstract:
Protein engineering aims to improve the properties of enzymes and affinity reagents by genetic changes. Typical engineered properties are affinity, specificity, stability, expression, and solubility. Because proteins are complex biomolecules, the effects of specific genetic changes are seldom predictable. Consequently, a popular strategy in protein engineering is to create a library of genetic variants of the target molecule and subject the population to a selection process that sorts the variants by the desired property. This technique, called directed evolution, is a central tool for tailoring protein-based products used in a wide range of applications from laundry detergents to anti-cancer drugs. New methods are continuously needed to generate larger gene repertoires and compatible selection platforms to shorten the development timeline for new biochemicals. In the first study of this thesis, primer extension mutagenesis was revisited to establish higher-quality gene variant libraries in Escherichia coli cells. In the second study, recombination was explored as a method to expand the number of screenable enzyme variants. A selection platform was developed to improve antigen-binding fragment (Fab) display on filamentous phages in the third article and, in the fourth study, novel design concepts were tested with two differentially randomized recombinant antibody libraries. Finally, in the last study, the performance of the same antibody repertoire was compared in phage display selections as a genetic fusion to different phage capsid proteins and in different antibody formats, Fab vs. single-chain variable fragment (scFv), in order to find the most suitable display platform for the library at hand. As a result of the studies, a novel gene library construction method, termed selective rolling circle amplification (sRCA), was developed.
The method increases the mutagenesis frequency to close to 100% in the final library and the number of transformants by over 100-fold compared to traditional primer extension mutagenesis. In the second study, Cre/loxP recombination was found to be an appropriate tool for resolving the DNA concatemer resulting from error-prone RCA (epRCA) mutagenesis into monomeric circular DNA units for higher-efficiency transformation into E. coli. Library selections against antigens of various sizes in the fourth study demonstrated that diversity placed closer to the antigen-binding site of antibodies supports the generation of antibodies against haptens and peptides, whereas diversity at more peripheral locations is better suited for targeting proteins. The conclusion from a comparison of the display formats was that the truncated capsid protein three (p3Δ) of filamentous phage was superior to the full-length p3 and protein nine (p9) in obtaining a high number of uniquely specific clones. Especially for digoxigenin, a difficult hapten target, the antibody repertoire as scFv-p3Δ provided the clones with the highest binding affinity. This thesis on the construction, design, and selection of gene variant libraries contributes to the practical know-how of directed evolution and contains useful information to support scientists working in the field.
Abstract:
A high-frequency cycloconverter acts as a direct ac-to-ac power converter circuit that does not require a diode bridge rectifier. The bridgeless topology makes it possible to remove the forward voltage drop losses that are present in a diode bridge. In addition, the on-state losses can be reduced to 1.5 times the on-state resistance of the switches in half-bridge operation of the cycloconverter. A high-frequency cycloconverter is reviewed and the charging effect of the dc-capacitors in “back-to-back” or synchronous mode operation is analyzed. In addition, a control method is introduced for regulating the dc-voltage of the ac-side capacitors in synchronous operation mode. The controller regulates the dc-capacitors and prevents the switches from reaching the overvoltage level. This can be accomplished by varying the phase shift between the upper and the lower gate signals. By adding a phase shift between the gate signal pairs, the charge stored in the energy storage capacitors can be discharged through the resonant load and, consequently, the output resonant current amplitude can be increased. These goals are analyzed and illustrated with simulations. The theory is supported by practical measurements in which the proposed control method is implemented in an FPGA device and tested with a high-frequency cycloconverter using super-junction power MOSFETs as switching devices.
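The gate-timing idea behind the phase-shift control can be illustrated with a toy waveform model: two square-wave gate signals, with the lower one delayed by a variable phase shift, and the fraction of time both are simultaneously on. This is only a schematic sketch of the gate timing under assumed parameters (100 kHz switching, 50% duty); it does not model the cycloconverter circuit, the dc-capacitors, or the resonant load.

```python
import numpy as np

# Toy illustration: how a phase shift between gate-signal pairs changes
# their conduction overlap. All waveform parameters are illustrative.

def gate(tt, period, duty=0.5, shift=0.0):
    """Square-wave gate signal; shift is a phase shift in radians."""
    phase = (tt / period - shift / (2 * np.pi)) % 1.0
    return (phase < duty).astype(float)

period = 1.0 / 100e3                      # 100 kHz switching, assumed
tt = np.linspace(0, 4 * period, 4000, endpoint=False)

for shift in (0.0, np.pi / 4, np.pi / 2):
    upper = gate(tt, period)
    lower = gate(tt, period, shift=shift)
    overlap = np.mean(upper * lower)      # fraction of time both are on
    print(f"shift {shift:.2f} rad -> overlap {overlap:.2f}")
```

With 50% duty, the overlap fraction falls linearly from 0.5 at zero shift to 0.25 at a 90° shift, which is the knob the controller varies to regulate how much capacitor charge is released per cycle.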
Abstract:
Results of subgroup analysis (SA) reported in randomized clinical trials (RCT) cannot be adequately interpreted without information about the methods used in the study design and the data analysis. Our aim was to show how often inaccurate or incomplete reports occur. First, we selected eight methodological aspects of SA on the basis of their importance to a reader in determining the confidence that should be placed in the author's conclusions regarding such analysis. Then, we reviewed the current practice of reporting these methodological aspects of SA in clinical trials in four leading journals, i.e., the New England Journal of Medicine, the Journal of the American Medical Association, the Lancet, and the American Journal of Public Health. Eight consecutive reports from each journal published after July 1, 1998 were included. Of the 32 trials surveyed, 17 (53%) had at least one SA. Overall, the proportion of RCT reporting a particular methodological aspect ranged from 23 to 94%. Information on whether the SA preceded/followed the analysis was reported in only 7 (41%) of the studies. Of the total possible number of items to be reported, NEJM, JAMA, Lancet and AJPH clearly mentioned 59, 67, 58 and 72%, respectively. We conclude that current reporting of SA in RCT is incomplete and inaccurate. The results of such SA may have harmful effects on treatment recommendations if accepted without judicious scrutiny. We recommend that editors improve the reporting of SA in RCT by giving authors a list of the important items to be reported.
Abstract:
The objective of this study was to evaluate the effect of the carcass suspension method on sheep meat quality. Ten cull ewes with approximately 62 kg of body weight were used. After slaughter, flaying, evisceration and removal of the head and feet, carcasses were divided longitudinally into two halves. Alternating sides of the half carcasses were hung by the tendon of the gastrocnemius (Treatment 1 - T1) or by the pelvic bone (Treatment 2 - T2) in a cold store for a 24-hour period. Subsequently, the Semimembranosus muscle was removed from all half carcasses for meat quality analyses. The Semimembranosus muscles from carcasses hung by the pelvic suspension method were softer than the same muscles from carcasses hung by the tendon of the gastrocnemius, with shear force values of 1.99 kgf.cm-2 and 3.15 kgf.cm-2, respectively. Treatment 2 also presented lower meat cooking losses than Treatment 1, with average values of 32.14 and 33.44%, respectively. The remaining meat quality parameters evaluated were not influenced by the carcass suspension method. We concluded that the carcass suspension method influenced meat softness and cooking losses, with better results for carcasses hung by the pelvic bone.
Abstract:
Seed dormancy is a frequent phenomenon in tropical species, causing slow and non-uniform germination. To overcome this, treatments such as scarification on an abrasive surface and hot water are efficient. The objective of this study was to quantify seed germination with no treatment (Experiment 1) and to identify an efficient method of breaking dormancy in Schizolobium amazonicum Huber ex Ducke seeds (Experiment 2). The effects of manual scarification on electric emery, water at 80ºC and 100ºC, and manual scarification on wood sandpaper were studied. Seeds were sown in a sand and sawdust mixture, either immediately after scarification or after immersion in water for 24 h. Germination and hard-seed percentages and germination speed were recorded and analyzed in a completely randomized design. Analysis of germination was carried out at six, nine, 12, 15, 18, 21 and 24 days after sowing as a 4x2 factorial design and through regression analysis. Treatment means of the remaining variables were compared by the Tukey test. Seed germination with no treatment started on the 7th day after sowing and reached 90% on the 2310th day (Experiment 1). A significant interaction between the dormancy-breaking treatments and the time of immersion in water was observed (Experiment 2). In general, immersion in water increased germination in most evaluations. The regression analyses were significant for all treatments with the exception of the control treatment and immersion in water at 80ºC. Germination speed was higher when seeds were scarified on an abrasive surface (emery and sandpaper) and, in these treatments, germination ranged from 87% to 96%, with no hard seeds. S. amazonicum seed coats are impermeable to water, which hinders quick and uniform germination. Scarification on electric emery followed by immediate sowing, scarification on sandpaper followed by immediate sowing, and sowing after 24 h of immersion were the most efficient treatments for overcoming dormancy in S. amazonicum seeds.
Abstract:
The purpose of this study was to determine novice teachers' perceptions of the extent to which the Brock University teacher education program focused on strategies for promoting responsibility in students. Individual interviews were conducted with ten randomly selected teachers who were graduates of this teacher education program between the years of 1989 and 1992, and a follow-up group discussion activity with the same teachers was also held. Findings revealed that the topic of personal responsibility was discussed within various components of the program, including counselling group sessions, but that these discussions were often brief, indirect and inconsistent. Some of the strategies which the teachers used in their own classrooms to promote responsibility in students were ones which they had acquired from those counselling group sessions or from associate teachers. Various strategies included: setting clear expectations of students with positive and negative consequences for behaviour (e.g., material rewards and detentions, respectively), communicating with other teachers and parents, and suspending students from school. A teacher's choice of any particular strategy seemed to be affected by his or her personality, teaching subject and region of employment, as well as certain aspects of the teacher education program. It was concluded that many of the teachers appeared to be controlling rude and violent behaviour, as opposed to promoting responsible behaviour. Recommendations were made for the pre-service program, as well as induction and in-service programs, to increase teacher preparedness for promoting responsible student behaviour. One of these recommendations addressed the need to help teachers learn how to effectively communicate with their students.
Abstract:
The study of variable stars is an important topic in modern astrophysics. Since the invention of powerful telescopes and CCDs of high resolving power, variable star data have been accumulating on the order of petabytes. This huge amount of data requires many automated methods as well as human experts. This thesis is devoted to data analysis of variable stars' astronomical time series data and hence belongs to the interdisciplinary topic of Astrostatistics. For an observer on earth, stars whose apparent brightness changes over time are called variable stars. The variation in brightness may be regular (periodic), quasi-periodic (semi-periodic) or irregular (aperiodic) and is caused by various reasons. In some cases, the variation is due to internal thermonuclear processes; such stars are generally known as intrinsic variables. In other cases, it is due to external processes, like eclipses or rotation; these are known as extrinsic variables. Intrinsic variables can be further grouped into pulsating variables, eruptive variables and flare stars. Extrinsic variables are grouped into eclipsing binary stars and chromospheric stars. Pulsating variables can again be classified into Cepheid, RR Lyrae, RV Tauri, Delta Scuti, Mira, etc. The eruptive or cataclysmic variables are novae, supernovae, etc., which occur rarely and are not periodic phenomena. Most of the other variations are periodic in nature. Variable stars can be observed in many ways, such as photometry, spectrophotometry and spectroscopy. A sequence of photometric observations of a variable star produces time series data containing time, magnitude and error. The plot of a variable star's apparent magnitude against time is known as a light curve. If the time series data are folded on a period, the plot of apparent magnitude against phase is known as a phased light curve. The unique shape of the phased light curve is a characteristic of each type of variable star.
One way to identify the type of a variable star and to classify it is for an expert to visually inspect the phased light curve. For the last several years, automated algorithms have been used to classify groups of variable stars with the help of computers. Research on variable stars can be divided into different stages: observation, data reduction, data analysis, modeling and classification. The modeling of variable stars helps to determine their short-term and long-term behaviour, to construct theoretical models (e.g. the Wilson-Devinney model for eclipsing binaries) and to derive stellar properties like mass, radius, luminosity, temperature, internal and external structure, chemical composition and evolution. Classification requires the determination of basic parameters like period, amplitude and phase, as well as some other derived parameters. Of these, period is the most important, since wrong periods lead to sparse light curves and misleading information. Time series analysis is a method of applying mathematical and statistical tests to data in order to quantify the variation, understand the nature of time-varying phenomena, gain physical understanding of the system and predict its future behavior. Astronomical time series usually suffer from unevenly spaced time instants, varying error conditions and the possibility of big gaps. This is due to daily varying daylight and weather conditions for ground-based observations, while observations from space may suffer from the impact of cosmic ray particles. Many large-scale astronomical surveys, such as MACHO, OGLE, EROS, ROTSE, PLANET, Hipparcos, MISAO, NSVS, ASAS, Pan-STARRS, Kepler, ESA, Gaia, LSST and CRTS, provide variable star time series data, even though their primary intention is not variable star observation.
The Center for Astrostatistics at Pennsylvania State University was established to help the astronomical community with statistical tools for harvesting and analysing archival data. Most of these surveys release their data to the public for further analysis. There exist many period search algorithms for astronomical time series analysis, which can be classified into parametric (assuming some underlying distribution for the data) and non-parametric (assuming no statistical model, such as a Gaussian) methods. Many of the parametric methods are based on variations of the discrete Fourier transform, like the Generalised Lomb-Scargle periodogram (GLSP) by Zechmeister (2009) and Significant Spectrum (SigSpec) by Reegen (2007). Non-parametric methods include Phase Dispersion Minimisation (PDM) by Stellingwerf (1978) and the cubic spline method by Akerlof (1994). Even though most of these methods can be automated, none of them can fully recover the true periods. Wrong detection of the period can be due to several reasons, such as power leakage to other frequencies, which results from the finite total interval, finite sampling interval and finite amount of data. Another problem is aliasing, which is due to the influence of regular sampling. Spurious periods also appear due to long gaps, and power flow to harmonic frequencies is an inherent problem of Fourier methods. Hence, obtaining the exact period of a variable star from its time series data is still a difficult problem for huge databases subjected to automation. As Matthew Templeton, AAVSO, states, “Variable star data analysis is not always straightforward; large-scale, automated analysis design is non-trivial”. Derekas et al. (2007) and Deb et al. (2010) state, “The processing of huge amounts of data in these databases is quite challenging, even when looking at seemingly small issues such as period determination and classification”.
It will be beneficial for the variable star astronomical community if basic parameters such as period, amplitude and phase can be obtained more accurately when huge time series databases are subjected to automation. In the present thesis work, the theories of four popular period search methods are studied, the strengths and weaknesses of these methods are evaluated by applying them to two survey databases, and finally a modified form of the cubic spline method is introduced to confirm the exact period of a variable star. For the classification of newly discovered variable stars and their entry into the “General Catalogue of Variable Stars” or other databases like the “Variable Star Index”, the characteristics of the variability have to be quantified in terms of variable star parameters.
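As an illustration of the non-parametric approach mentioned above, a minimal phase dispersion minimisation (PDM, Stellingwerf 1978) search on a synthetic, unevenly sampled light curve might look as follows. The data, number of bins, and trial-period grid are illustrative assumptions; real survey data would add error bars, long gaps, and aliasing, which this sketch does not handle.

```python
import numpy as np

# Minimal PDM sketch: fold the light curve on trial periods and pick the
# period that minimises within-bin scatter relative to total scatter.

rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0, 100, 400))          # uneven time sampling (days)
true_period = 2.5
mag = 12.0 + 0.3 * np.sin(2 * np.pi * t / true_period) \
      + rng.normal(0, 0.02, t.size)            # synthetic magnitudes

def pdm_theta(t, mag, period, n_bins=10):
    """Ratio of within-bin variance to total variance for a trial period.
    A small theta means the phased light curve is tight (good period)."""
    phase = (t / period) % 1.0
    bins = np.minimum((phase * n_bins).astype(int), n_bins - 1)
    within = 0.0
    for b in range(n_bins):
        m = mag[bins == b]
        if m.size > 1:
            within += m.var() * m.size
    return within / (mag.size * mag.var())

trial_periods = np.linspace(1.0, 5.0, 4001)
theta = np.array([pdm_theta(t, mag, p) for p in trial_periods])
best = trial_periods[np.argmin(theta)]
print(f"recovered period: {best:.3f} d (true {true_period} d)")
```

The same folding-and-scatter idea underlies the cubic spline variant discussed in the thesis; only the smoothness statistic computed on the phased curve differs.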
Abstract:
Ultrafast laser pulses have become an integral part of the toolbox of countless laboratories doing physics, chemistry, and biological research. The work presented here is motivated by one branch of the ever-growing, interdisciplinary research towards understanding the fundamental workings of light-matter interactions. Specifically, attosecond pulses can be useful tools to obtain the desired insight. However, access to, and the utility of, such pulses depends on the generation of intense, few-cycle, carrier-envelope-phase stabilized laser pulses. The presented work can be thought of as a roadmap towards the latter. From the oscillator which provides the broadband seed to the amplification methods, the integral pieces necessary for the generation of attosecond pulses are discussed. A range of topics from the fundamentals to design challenges is presented, paving the way towards the practical implementation of an intense few-cycle carrier-envelope-phase stabilized laser source.
Abstract:
Most psychophysical studies of object recognition have focussed on the recognition and representation of individual objects on which subjects had previously been explicitly trained. Correspondingly, modeling studies have often employed a 'grandmother'-type representation in which the objects to be recognized were represented by individual units. However, objects in the natural world are commonly members of a class containing a number of visually similar objects, such as faces, for which physiology studies have provided support for a representation based on a sparse population code, which permits generalization from the learned exemplars to novel objects of that class. In this paper, we present results from psychophysical and modeling studies intended to investigate object recognition in natural ('continuous') object classes. In two experiments, subjects were trained to perform subordinate-level discrimination in a continuous object class - images of computer-rendered cars - created using a 3D morphing system. By comparing the recognition performance of trained and untrained subjects we could estimate the effects of viewpoint-specific training and infer properties of the object class-specific representation learned as a result of training. We then compared the experimental findings to simulations, building on our recently presented HMAX model of object recognition in cortex, to investigate the computational properties of a population-based object class representation as outlined above. We find experimental evidence, supported by modeling results, that training builds a viewpoint- and class-specific representation that supplements a pre-existing representation with lower shape discriminability but possibly greater viewpoint invariance.