943 results for Fourth order method
Abstract:
In this work we present the formulas for the calculation of exact three-center electron sharing indices (3c-ESI) and introduce two new approximate expressions for correlated wave functions. The 3c-ESI uses the third-order density, the diagonal of the third-order reduced density matrix, but the approximations suggested in this work only involve natural orbitals and occupancies. In addition, the first calculations of 3c-ESI using Valdemoro's, Nakatsuji's and Mazziotti's approximations for the third-order reduced density matrix are also presented for comparison. Our results on a test set of molecules, including 32 3c-ESI values, prove that the new approximation based on the cube root of natural occupancies performs the best, yielding absolute errors below 0.07 and an average absolute error of 0.015. Furthermore, this approximation seems to be rather insensitive to the amount of electron correlation present in the system. This newly developed methodology provides a computationally inexpensive method to calculate 3c-ESI from correlated wave functions and opens new avenues to approximate high-order reduced density matrices in other contexts, such as the contracted Schrödinger equation and the anti-Hermitian contracted Schrödinger equation.
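As a rough illustration of what a natural-occupancy approximation of this kind can look like in practice, the sketch below assumes the approximate index takes the form δ(A,B,C) ≈ Σ_ijk (n_i n_j n_k)^(1/3) S^A_ij S^B_jk S^C_ki, with n_i the natural occupancies and S^X the atomic overlap matrix over domain X in the natural-orbital basis. The prefactor, permutational symmetrization and normalization used in the paper may differ; this is only a structural sketch, not the published working equation.

```python
import numpy as np

def approx_3c_esi(occupancies, S_A, S_B, S_C):
    """Cube-root natural-occupancy sketch of a three-center ESI.

    occupancies   : (n,) natural orbital occupation numbers
    S_A, S_B, S_C : (n, n) atomic overlap matrices for domains A, B, C
                    in the natural-orbital basis (assumed inputs).
    """
    d = np.diag(np.cbrt(occupancies))  # diag(n_i**(1/3))
    # Tr[d S_A d S_B d S_C] = sum_ijk (n_i n_j n_k)^(1/3) S^A_ij S^B_jk S^C_ki
    return float(np.trace(d @ S_A @ d @ S_B @ d @ S_C))
```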
Abstract:
Fluent health information flow is critical for clinical decision-making. However, a considerable part of this information is free-form text, and inabilities to utilize it create risks to patient safety and cost-effective hospital administration. Methods for automated processing of clinical text are emerging. The aim in this doctoral dissertation is to study machine learning and clinical text in order to support health information flow. First, by analyzing the content of authentic patient records, the aim is to specify clinical needs in order to guide the development of machine learning applications. The contributions are a model of the ideal information flow, a model of the problems and challenges in reality, and a road map for the technology development. Second, by developing applications for practical cases, the aim is to concretize ways to support health information flow. Altogether five machine learning applications for three practical cases are described: The first two applications are binary classification and regression related to the practical case of topic labeling and relevance ranking. The third and fourth applications are supervised and unsupervised multi-class classification for the practical case of topic segmentation and labeling. These four applications are tested with Finnish intensive care patient records. The fifth application is multi-label classification for the practical task of diagnosis coding. It is tested with English radiology reports. The performance of all these applications is promising. Third, the aim is to study how the quality of machine learning applications can be reliably evaluated. The associations between performance evaluation measures and methods are addressed, and a new hold-out method is introduced. This method contributes not only to processing time but also to the evaluation diversity and quality. The main conclusion is that developing machine learning applications for text requires interdisciplinary, international collaboration. Practical cases are very different, and hence the development must begin from genuine user needs and domain expertise. The technological expertise must cover linguistics, machine learning, and information systems. Finally, the methods must be evaluated both statistically and through authentic user feedback.
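For readers unfamiliar with the multi-label setup mentioned for the fifth application (diagnosis coding from radiology reports), the sketch below shows one generic way such a classifier can be assembled with scikit-learn. The toy reports, codes and model choices are illustrative assumptions only and are not the dissertation's actual data or systems.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MultiLabelBinarizer

# Toy reports and diagnosis codes (placeholders, not real data).
reports = ["chest x-ray shows right lower lobe consolidation",
           "no acute cardiopulmonary abnormality",
           "small left pleural effusion with atelectasis"]
codes = [["pneumonia"], ["normal"], ["effusion", "atelectasis"]]

mlb = MultiLabelBinarizer()
Y = mlb.fit_transform(codes)  # binary indicator matrix, one column per code

# TF-IDF features plus one binary classifier per diagnosis code.
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                    OneVsRestClassifier(LogisticRegression(max_iter=1000)))
clf.fit(reports, Y)

pred = clf.predict(["left pleural effusion noted"])
print(mlb.inverse_transform(pred))  # predicted set of codes (possibly empty)
```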
Abstract:
A simple, fast and sensitive spectrophotometric method for the determination of cefaclor in pharmaceutical raw materials and dosage forms, based on the reaction with ninhydrin, is developed, optimized and validated. The purple color (Ruhemann's purple) that resulted from the reaction was stabilized and measured at 560 nm. Beer's law is obeyed in the concentration range of 4-80 µg mL⁻¹ with a molar absorptivity of 1.42 × 10⁵ L mol⁻¹ cm⁻¹. All variables, including the reagent concentration, heating time, reaction temperature, color stability period, and cefaclor/ninhydrin ratio, were studied in order to optimize the reaction conditions. No interference was observed from common pharmaceutical adjuvants. The developed method is easy to use, accurate and highly cost-effective for routine studies relative to HPLC and other techniques.
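Quantitation in a method like this rests on Beer's law, A = ε·b·c. As a minimal sketch of the unit handling, the snippet below converts a measured absorbance back to a mass concentration using the reported molar absorptivity; the 1 cm path length and the molar mass of cefaclor (about 367.8 g/mol) are assumed values, not taken from the abstract.

```python
EPSILON = 1.42e5       # molar absorptivity, L mol^-1 cm^-1 (from the abstract)
PATH_CM = 1.0          # cuvette path length in cm (assumed)
MW_CEFACLOR = 367.8    # g/mol, approximate molar mass of cefaclor (assumed)

def cefaclor_conc_ug_per_ml(absorbance):
    """Beer's law A = epsilon * b * c, solved for c and converted to ug/mL."""
    c_mol_per_l = absorbance / (EPSILON * PATH_CM)   # mol/L
    c_g_per_l = c_mol_per_l * MW_CEFACLOR            # g/L
    return c_g_per_l * 1000.0                        # 1 g/L = 1000 ug/mL

print(cefaclor_conc_ug_per_ml(0.5))                  # example absorbance
```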
Abstract:
A derivative UV spectrophotometric method for the determination of estradiol valerate in tablets was validated. The parameters specificity, linearity, precision, accuracy, limit of detection and limit of quantitation were studied according to validation guidelines. The first-order derivative spectra were obtained at N = 5, Δλ = 4.0 nm, and determinations were made at 270 nm. The method showed specificity and linearity in the concentration range of 0.20 to 0.40 mg mL⁻¹. The intra- and inter-day precision data demonstrated that the method has good reproducibility. Accuracy was also evaluated and the results were satisfactory. The proposed method was successfully applied to a pharmaceutical formulation.
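First-derivative spectra like those described (smoothing parameter N = 5, Δλ = 4.0 nm) are, in practice, obtained by smoothed numerical differentiation of the zero-order spectrum. The sketch below uses a Savitzky-Golay filter as a generic stand-in for the instrument's derivative algorithm; the synthetic band, window length and polynomial order are illustrative assumptions, not the authors' settings.

```python
import numpy as np
from scipy.signal import savgol_filter

# Synthetic zero-order spectrum: a single absorbance band (placeholder data).
wavelengths = np.arange(230.0, 320.0, 0.5)                 # nm
absorbance = np.exp(-((wavelengths - 282.0) / 12.0) ** 2)  # arbitrary band

step = wavelengths[1] - wavelengths[0]
# First-derivative spectrum dA/dlambda with a 9-point smoothing window.
first_deriv = savgol_filter(absorbance, window_length=9, polyorder=3,
                            deriv=1, delta=step)

# Read the derivative amplitude at the analytical wavelength (270 nm).
idx = int(np.argmin(np.abs(wavelengths - 270.0)))
print(f"dA/dlambda at 270 nm: {first_deriv[idx]:.4f}")
```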
Abstract:
The bioassay, first-order derivative UV spectrophotometry and chromatographic methods for assaying fluconazole capsules were compared. They have shown great advantages over the earlier published methods. Using the first-order derivative, the UV spectrophotometric method does not suffer interference from excipients. Validation parameters such as linearity, precision, accuracy, limit of detection and limit of quantitation were determined. All methods were linear and reliable within the limits acceptable for antibiotic pharmaceutical preparations, being accurate, precise and reproducible. The application of each method as a routine analysis should be investigated considering cost, simplicity, equipment, solvents, speed, and application to large or small workloads.
Abstract:
The dissertation is based on four articles dealing with the purification of recalcitrant lignin-containing waters. Lignin, a complex substance recalcitrant to most treatment technologies, seriously hampers waste management in the pulp and paper industry. Therefore, lignin degradation is studied using wet oxidation (WO) as the process method. Special attention is paid to the improvement in biodegradability and the reduction of lignin content, since these are of special importance for any subsequent biological treatment. In most cases wet oxidation is used not as a complete mineralization method but as a pre-treatment in order to eliminate toxic components and to reduce the high level of organics produced. The combination of wet oxidation with a biological treatment can be a good option due to its effectiveness and its relatively low technology cost. The literature part gives an overview of Advanced Oxidation Processes (AOPs). A hot oxidation process, wet oxidation (WO), is investigated in detail, as it is the AOP used in the research. The background and main principles of wet oxidation, its industrial applications, the combination of wet oxidation with other water treatment technologies, the principal reactions in WO, and key aspects of modelling and reaction kinetics are presented. Wood composition and lignin characterization (chemical composition, structure and origin), lignin-containing waters, lignin degradation and reuse possibilities, and purification practices for lignin-containing waters are also covered. The aim of the research was to investigate the effect of the operating conditions of WO, such as temperature, partial pressure of oxygen, pH and initial concentration of the wastewater, on the efficiency, and to enhance the process and estimate optimal conditions for WO of recalcitrant lignin waters. Two different waters are studied (a lignin water model solution and debarking water from the paper industry) to give as appropriate conditions as possible. Due to the great importance of reusing and minimizing industrial residues, further research is carried out using residual ash from an Estonian power plant as a catalyst in the wet oxidation of lignin-containing water. Developing a kinetic model that includes parameters such as TOC in the prediction gives the opportunity to estimate the amount of emerging inorganic substances (the degradation rate of the waste) and not only the decrease of COD and BOD. The target compound, lignin, is included in the model through its COD value (CODlignin). Such a kinetic model can be valuable in developing WO treatment processes for lignin-containing waters, or other wastewaters containing one or more target compounds. In the first article, wet oxidation of "pure" lignin water was investigated as a model case with the aim of degrading lignin and enhancing water biodegradability. The experiments were performed at various temperatures (110-190 °C), partial oxygen pressures (0.5-1.5 MPa) and pH values (5, 9 and 12). The experiments showed that increasing the temperature notably improved the process efficiency: a 75% lignin reduction was detected at the lowest temperature tested, and lignin removal improved to 100% at 190 °C. The effect of temperature on the COD removal rate was lower, but clearly detectable; 53% of the organics were oxidized at 190 °C. The effect of pH was seen mostly in lignin removal: increasing the pH enhanced the lignin removal efficiency from 60% to nearly 100%. A good biodegradability ratio (over 0.5) was generally achieved.
The aim of the second article was to develop a mathematical model for the wet oxidation of "pure" lignin water using lumped characteristics of the water (COD, BOD, TOC) and the lignin concentration. The model agreed well with the experimental data (R² = 0.93 at pH 5 and 12), and the concentration changes during wet oxidation adequately followed the experimental results. The model also correctly showed the trend of biodegradability (BOD/COD) changes. In the third article, the purpose of the research was to estimate optimal conditions for wet oxidation (WO) of debarking water from the paper industry. The WO experiments were performed at various temperatures, partial oxygen pressures and pH values. The experiments showed that lignin degradation and organics removal are affected remarkably by temperature and pH. A 78-97% lignin reduction was detected under the different WO conditions. An initial pH of 12 caused faster removal of the tannin/lignin content, but an initial pH of 5 was more effective for the removal of total organics, represented by COD and TOC. Most of the decrease in the concentrations of organic substances occurred in the first 60 minutes. The aim of the fourth article was to compare the behaviour of two reaction kinetic models, based on experiments of wet oxidation of industrial debarking water under different conditions. The simpler model took into account only the changes in COD, BOD and TOC; the advanced model was similar to the model used in the second article. Comparing the results of the models, the second model was found to be more suitable for describing the kinetics of wet oxidation of debarking water. The significance of the reactions involved was compared on the basis of the model: for instance, lignin degraded first to other chemically oxidizable compounds rather than directly to biodegradable products. Catalytic wet oxidation (CWO) of lignin-containing waters is briefly presented at the end of the dissertation. Two completely different catalysts were used: a commercial Pt catalyst and waste power plant ash. CWO showed good performance: using 1 g/L of residual ash gave a lignin removal of 86% and a COD removal of 39% at 150 °C (a lower temperature and pressure than with WO). It was noted that the ash catalyst caused a remarkable lignin degradation rate already during the pre-heating: at 'zero' time, 58% of the lignin was degraded. In general, wet oxidation is recommended not as a complete mineralization method, but as a pre-treatment phase to eliminate toxic or poorly biodegradable components and to reduce the high level of organics. Biological treatment is an appropriate post-treatment method since easily biodegradable organic matter remains after the WO process. The combination of wet oxidation with subsequent biological treatment can be an effective option for the treatment of lignin-containing waters.
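As an illustration of what such a lumped kinetic description can look like, the sketch below integrates a small system of assumed first-order reactions between CODlignin, other oxidizable COD and biodegradable matter. The reaction scheme, rate constants and initial values are placeholders chosen for the example and are not the fitted models from the second or fourth article.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative lumped scheme (rate constants in 1/min, all assumed):
#   CODlignin -> CODother   (k1: lignin degrades to other oxidizable organics)
#   CODother  -> BOD        (k2: formation of biodegradable matter)
#   CODother, BOD -> CO2    (k3: mineralization, which also removes TOC)
k1, k2, k3 = 0.030, 0.010, 0.005
TOC_PER_COD = 0.4  # assumed proportionality between TOC and COD removal

def rates(t, y):
    cod_lignin, cod_other, bod, toc = y
    d_lignin = -k1 * cod_lignin
    d_other = k1 * cod_lignin - (k2 + k3) * cod_other
    d_bod = k2 * cod_other - k3 * bod
    d_toc = -k3 * (cod_other + bod) * TOC_PER_COD
    return [d_lignin, d_other, d_bod, d_toc]

y0 = [900.0, 300.0, 150.0, 450.0]   # mg/L, illustrative initial values
sol = solve_ivp(rates, (0.0, 120.0), y0, t_eval=np.linspace(0.0, 120.0, 7))
for t, c in zip(sol.t, sol.y[0]):
    print(f"t = {t:5.1f} min   CODlignin = {c:6.1f} mg/L")
```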
Abstract:
MgO is an important inorganic material with many uses, for example as a catalyst, toxic-waste remediation agent or adsorbent. In this paper, nano-MgO was prepared by an ultrasonic method using Mg(CH3COO)2·2H2O as the precursor and aqueous NaOH solution as the precipitant. The factors affecting the MgO nanoparticle size were investigated. The characteristics of the samples were measured by TGA, XRD, TEM and other techniques. The results showed that nano-MgO with a size of about 4 nm could be obtained under the following conditions: ultrasonic time 20 min, ultrasonic power 250 W, NaOH titration rate 0.25 mL/min, NaOH concentration 0.48 mol/L, calcination temperature 410 °C, calcination time 1.5 h, and calcination heating rate 5 °C/min. This is a very simple and effective method for preparing nano-MgO.
Abstract:
In order to develop a molecular method for the detection and identification of Xanthomonas campestris pv. viticola (Xcv), the causal agent of grapevine bacterial canker, primers were designed based on the partial sequence of the hrpB gene. The primer pairs Xcv1F/Xcv3R and RST2/Xcv3R, which amplified 243- and 340-bp fragments, respectively, were tested for specificity and sensitivity in detecting DNA from Xcv. Amplification was positive with DNA from 44 Xcv strains as well as with DNA from four strains of X. campestris pv. mangiferaeindicae and five strains of X. axonopodis pv. passiflorae, with both primer pairs. However, enzymatic digestion of the PCR products could differentiate the Xcv strains from the others. Neither primer pair amplified DNA from grapevine, from 20 strains of nonpathogenic bacteria from grape leaves, or from 10 strains of six representative genera of plant-pathogenic bacteria. The sensitivity of primers Xcv1F/Xcv3R and RST2/Xcv3R was 10 pg and 1 pg of purified Xcv DNA, respectively. The detection limit of primers RST2/Xcv3R was 10⁴ CFU/mL, but this limit could be lowered to 10² CFU/mL with a second round of amplification using the internal primer Xcv1F. The presence of Xcv in tissues of grapevine petioles previously inoculated with Xcv could not be detected by PCR using macerated extract added directly to the reaction. However, amplification was positive with the introduction of an agar plating step prior to PCR. Xcv could be detected in 1 µL of the plate wash and in a cell suspension obtained from a single colony. The identity of the bacterium was confirmed by RFLP analysis of the RST2/Xcv3R amplification products digested with HaeIII.
Abstract:
The paper supports a dialectical interpretation of Wittgenstein's method focusing on the analysis of the conditions of experience presented in his Philosophical Remarks. By means of a close reading of some key passages dealing with solipsism I will try to lay bare their self-subverting character: the fact that they amount to miniature dialectical exercises offering specific directions to pass from particular pieces of disguised nonsense to corresponding pieces of patent nonsense. Yet, in order to follow those directions one needs to allow oneself to become simultaneously tempted by and suspicious of their all-too-evident "metaphysical tone" - a tone which, as we shall see, is particularly manifest in those claims purporting to state what can or cannot be the case, and, still more particularly, those purporting to state what can or cannot be done in language or thought, thus leading to the view that there are some (determinate) things which are ineffable or unthinkable. I conclude by suggesting that in writing those remarks Wittgenstein was still moved by an ethical project, which gets conspicuously displayed in these reiterations of his attempts to cure the readers (and himself) from some of the temptations expressed by solipsism.
Abstract:
The goal of this study is to examine the intelligent home business network in order to determine, using financial statement analysis, which part of the network has the best financial abilities to produce new business models and products/services. A group of 377 limited companies is divided into four segments based on their offering in producing intelligent homes. The segments are customer service providers, system integrators, subsystem suppliers and component suppliers. Eight different key figures are calculated for each of the companies to get a comprehensive view of their financial performance, after which each segment is studied statistically to determine the performance of the segment as a whole. The actual performance differences between the segments are calculated using a multi-criteria decision analysis method in which the performance on each key figure is graded and each key figure is weighted according to its importance for the goal of the study. The results of this analysis showed that subsystem suppliers have the best financial performance. Second best are system integrators, third are customer service providers and fourth are component suppliers. None of the segments was strikingly poor; even component suppliers performed reasonably, so it can be said that no part of the intelligent home business network has remarkably inadequate financial abilities to develop new business models and products/services.
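The grading-and-weighting step described above is, in essence, a weighted-sum scoring across the key figures. The sketch below illustrates this with placeholder grades and weights (the study used eight key figures; only four placeholder columns are shown), so the numbers are not the study's actual data.

```python
import numpy as np

# Rows: segments; columns: key figures already graded on a common 0-10 scale.
segments = ["customer service providers", "system integrators",
            "subsystem suppliers", "component suppliers"]
grades = np.array([
    [6.0, 7.0, 5.0, 6.0],
    [7.0, 6.0, 7.0, 6.0],
    [8.0, 7.0, 8.0, 7.0],
    [5.0, 6.0, 6.0, 5.0],
])

# Importance weights for the key figures (assumed values, summing to 1).
weights = np.array([0.4, 0.2, 0.2, 0.2])

scores = grades @ weights                 # weighted-sum score per segment
for name, score in sorted(zip(segments, scores), key=lambda p: -p[1]):
    print(f"{name:28s} {score:.2f}")
```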
Abstract:
The aim was to verify the accuracy and efficiency of the Point-Centered Quarter Method (PCQM) using different numbers of sampled individuals per point, at 28 quarter points in an Araucaria forest in southern Paraná, Brazil. Three variations of the PCQM, differing in the number of individual trees sampled, were compared: the standard PCQM (SD-PCQM), with four sampled individuals per point (one in each quarter); a second variation (VAR1-PCQM), with eight sampled individuals per point (two in each quarter); and a third variation (VAR2-PCQM), with 16 sampled individuals per point (four in each quarter). Thirty-one species of trees were recorded by the SD-PCQM method, 48 by VAR1-PCQM and 60 by VAR2-PCQM. The completeness of the vegetation census and the diversity index increased with the number of individuals considered per quarter, indicating that VAR2-PCQM was the most accurate and efficient method compared with VAR1-PCQM and SD-PCQM.
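For readers unfamiliar with point-centred quarter data, the sketch below shows two quantities commonly derived from it: the classical Cottam-Curtis density estimate based on mean point-to-tree distance, and a Shannon diversity index from species tallies. Both are generic textbook formulas with placeholder numbers; this is not the specific analysis carried out in the study.

```python
import numpy as np

def pcqm_density_per_ha(distances_m):
    """Classical Cottam-Curtis PCQM density estimate (individuals per ha)."""
    mean_d = np.mean(distances_m)          # mean point-to-tree distance (m)
    return 10_000.0 / mean_d ** 2          # 1 / d^2, scaled to one hectare

def shannon_index(counts):
    """Shannon diversity H' = -sum(p_i * ln p_i) from species counts."""
    p = np.asarray(counts, dtype=float)
    p = p[p > 0] / p.sum()
    return -np.sum(p * np.log(p))

# Placeholder data: point-to-nearest-tree distances (m) and species counts.
distances = [2.1, 3.4, 1.8, 2.9, 4.0, 2.2, 3.1, 2.5]
species_counts = [12, 7, 5, 3, 1]
print(pcqm_density_per_ha(distances), shannon_index(species_counts))
```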
Abstract:
The objective of this master's thesis was to find means and measures by which an industrial manufacturing company could find cost-competitive solutions in a price-driven market situation. Initially, it was essential to find individual spots of high customer value within the offering. The study addressed this in an innovative way by providing the desired information for the entire range of the offering. The research was carried out using the constructivist research approach. Firstly, the project and solution marketing literature was reviewed in order to establish an overview of the processes and strategies involved. This information was then used in conjunction with the company's specific offering data to build a construction. This construction can be used in various functions within the target company to streamline and optimize specifications into so-called "preferred offers". The study also presents channels and methods with which to exploit the construction in practice in the target company. The study aimed to bring concrete improvements in competitiveness and profitability. One result of this study was the creation of training material for internal use. This material is now used in several countries to present to staff the cost-competitive aspects of the target company's offering.
Abstract:
This study is dedicated to search engine marketing (SEM). It aims to develop a business model of SEM firms and to provide explicit research on trustworthy practices of virtual marketing companies. Optimization is a general term for a variety of techniques and methods of web page promotion. The research addresses optimization as a business activity and explains its role in online marketing. Additionally, it highlights the use of unethical techniques by marketers, which has created a relatively negative attitude towards them in the Internet environment. The literature insight combines in one place both technical and economic scientific findings in order to highlight the technological and business attributes incorporated in SEM activities. Empirical data regarding search marketers was collected via e-mail questionnaires. Four representatives of SEM companies were engaged in this study to accomplish the business model design. Additionally, a fifth respondent was a representative of a search engine portal, who provided insight on the relations between search engines and marketers. The information obtained from the respondents was processed qualitatively. The movement of commercial organizations to the online market increases demand for promotional programs. SEM is the largest part of online marketing, and it is a prerogative of search engine portals. However, skilled users, or marketers, are able to implement long-term marketing programs by utilizing web page optimization techniques, keyword consultancy or content optimization to increase web site visibility to search engines and, therefore, users' attention to the customer pages. SEM firms are small knowledge-intensive businesses. On the basis of the data analysis, the business model was constructed. The SEM model includes generalized constructs, although these represent a wider range of operational aspects. The building blocks of the model comprise the fundamental parts of SEM commercial activity: value creation, customer, infrastructure and financial segments. Approaches were also provided for evaluating a company's differentiation and competitive advantages. It is assumed that search marketers should make further attempts to differentiate their own business from the large number of similar service-providing companies. The findings indicate that SEM companies are interested in increasing their trustworthiness and building their reputation. The future of search marketing depends directly on the development of search engines.
Abstract:
The aim of this thesis is to examine whether pricing anomalies exist in the Finnish stock markets by comparing the performance of quantile portfolios that are formed on the basis of either individual valuation ratios, composite value measures or combined value and momentum indicators. All the research papers included in the thesis show evidence of value anomalies in the Finnish stock markets. In the first paper, the sample of stocks over the 1991-2006 period is divided into quintile portfolios based on four individual valuation ratios (i.e. E/P, EBITDA/EV, B/P, and S/P) and three hybrids of them (i.e. composite value measures). The results show the superiority of composite value measures as a selection criterion for value stocks, particularly when EBITDA/EV is employed as the earnings multiple. The main focus of the second paper is on the impact of the holding period length on the performance of value strategies. As an extension to the first paper, two more individual ratios (i.e. CF/P and D/P) are included in the comparative analysis. The sample of stocks over the 1993-2008 period is divided into tercile portfolios based on six individual valuation ratios and three hybrids of them. The use of either the dividend yield criterion or one of the three composite value measures examined results in the best value portfolio performance according to all the performance metrics used. Parallel to the findings of many international studies, our results from the performance comparisons indicate that, for the sample data employed, yearly reformation of the portfolios is not necessarily optimal in order to gain maximally from the value premium. Instead, the value investor may extend his holding period up to 5 years without any decrease in long-term portfolio performance. The same also holds for the results of the third paper, which examines the applicability of the data envelopment analysis (DEA) method in discriminating undervalued stocks from overvalued ones. The fourth paper examines the added value of combining price momentum with various value strategies. Taking account of price momentum improves the performance of value portfolios in most cases. The performance improvement is greatest for value portfolios formed on the basis of the 3-composite value measure consisting of the D/P, B/P and EBITDA/EV ratios. The risk-adjusted performance can be enhanced further by following a 130/30 long-short strategy in which the long position in value winner stocks is leveraged by 30 percentage points while glamour loser stocks are simultaneously sold short by the same amount. The average return of the long-short position proved to be more than double the stock market average, coupled with a decrease in volatility. The fifth paper offers a new approach to combining value and momentum indicators into a single portfolio-formation criterion using different variants of DEA models. The results throughout the 1994-2010 sample period show that the top-tercile portfolios outperform both the market portfolio and the corresponding bottom-tercile portfolios. In addition, the middle-tercile portfolios also outperform the comparable bottom-tercile portfolios when DEA models are used as a basis for the stock classification criteria. To my knowledge, such strong performance differences have not been reported in earlier peer-reviewed studies that have employed the comparable quantile approach of dividing stocks into portfolios.
Consistent with the previous literature, the division of the full sample period into bullish and bearish periods reveals that the top-quantile DEA portfolios lose far less of their value during bearish conditions than do the corresponding bottom portfolios. The sixth paper extends the sample period employed in the fourth paper by one year (i.e. 1993-2009), covering also the first years of the recent financial crisis. It contributes to the fourth paper by examining the impact of the stock market conditions on the main results. Consistent with the fifth paper, value portfolios lose much less of their value during bearish conditions than do stocks on average. The inclusion of a momentum criterion somewhat adds value to an investor during bullish conditions, but this added value turns negative during bearish conditions. During bear market periods some of the value loser portfolios perform even better than their value winner counterparts. Furthermore, the results show that the recent financial crisis has reduced the added value of using combinations of momentum and value indicators as portfolio-formation criteria. However, since the stock markets have historically been bullish more often than bearish, the combination of the value and momentum criteria has paid off to the investor despite the fact that its added value during bearish periods is, on average, negative.
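The portfolio-formation step used throughout these papers (ranking stocks on a composite of valuation ratios and splitting the ranking into quantiles) can be sketched with pandas. The tickers, ratio values, rank-average composite and tercile split below are illustrative assumptions only; they are not the thesis data or its exact composite definitions.

```python
import pandas as pd

# Placeholder cross-section of valuation ratios (higher value = cheaper stock).
data = pd.DataFrame({
    "EP":        [0.08, 0.05, 0.11, 0.03, 0.09, 0.06],
    "BP":        [0.90, 0.60, 1.30, 0.40, 1.10, 0.70],
    "EBITDA_EV": [0.12, 0.07, 0.15, 0.05, 0.13, 0.08],
}, index=["A", "B", "C", "D", "E", "F"])

# Composite value measure: average of the cross-sectional percentile ranks
# (one common construction; the thesis may define its composites differently).
composite = data.rank(pct=True).mean(axis=1)

# Split the composite into terciles: glamour (expensive) to value (cheap).
portfolio = pd.qcut(composite, 3, labels=["glamour", "neutral", "value"])
print(pd.concat([composite.rename("score"),
                 portfolio.rename("tercile")], axis=1))
```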
Abstract:
The purpose of this study is to develop a crowdsourced videographic research method for consumer culture research. Videography provides opportunities for expressing contextual and culturally embedded relations. Thus, developing new ways to conduct videographic research is meaningful. This study develops the crowdsourced videographic method based on a literature review and an evaluation of a focal study. The literature review follows a qualitative systematic review process. Through the literature review, based on different methodological, crowdsourcing and consumer research related literature, this study defines the method, its application process and its evaluation criteria. Furthermore, the evaluation of the focal study, where the method was applied, completes the study. This study applies professional review with self-evaluation as the form of evaluation, drawing on secondary data including the research task description, screenshots of the mobile application used in the focal study, videos collected from the participants, and a self-evaluation by the author. The focal study is analyzed according to its suitability to consumer culture research, its research process and its quality. Definitions and descriptions of the research method, its process and its quality criteria form the theoretical contribution of this study. Evaluating the focal study using these definitions underlines some best practices for this type of research, generating the practical contribution of this study. Finally, this study provides ideas for future research: first, defining the boundaries of the use of crowdsourcing in various parts of conducting research; second, improving the method by applying it to new research contexts; third, testing how changes in one dimension of the crowdsourcing models interact with the other dimensions; and fourth, comparing the quality criteria applied in this study to various other quality criteria to improve the method's usefulness. Overall, this study represents a starting point for further development of the crowdsourced videographic research method.