886 results for Search-based technique
Abstract:
This thesis summarizes studies of a class of white dwarfs (WDs) called DQ WDs. White dwarfs are the remnants of ordinary stars like our Sun that have run out of nuclear fuel. WDs are classified according to the composition of their atmosphere, and DQ WDs have an atmosphere made of helium and carbon. The carbon comes in either atomic or molecular form, and in some cases the strong spectral absorption features cover the entire optical wavelength region. The research presented here utilizes spectropolarimetry, an observational technique that combines spectroscopy and polarization measurements. Separately, these allow one to study the composition of a target and the inhomogeneous distribution of matter in it. Put together, they form a powerful tool to probe the physical properties of a stellar atmosphere, and they are especially good for detecting magnetic fields. The papers in this thesis describe a spectropolarimetric survey of DQ white dwarfs conducted to search for magnetic fields in them. Paper I describes the discovery of a new magnetic cool DQ white dwarf, GJ841B. Initial modeling of molecular features of DQ WDs showed inconsistencies with observations. The first proposed solution to this problem was stellar spots on these WDs. To investigate the matter, two DQ WDs were monitored for photometric variability that could arise from the presence of such spots. Paper II summarizes this short campaign and reports the negative results. Paper III reports observations of the rest of the objects in our survey. The paper includes the discovery of polarization from another cool DQ white dwarf, bringing the total of known magnetic cool DQs to three. Unfortunately, the model used in this thesis cannot, in its present state, be used to model these objects, nor are the observations of high enough spectroscopic resolution to do so.
Abstract:
The formal calibration procedure of a phase fraction meter is based on registering the outputs resulting from imposed phase fractions at known flow regimes. This can be done straightforwardly under laboratory conditions, but rarely under industrial conditions, particularly for on-site applications. Thus, there is a clear need for calibration methods that are less restrictive regarding prior knowledge of the complete set of inlet conditions. A new procedure is proposed in this work for the on-site construction of the calibration curve from total flowed-mass values of the homogeneously dispersed phase. The solution is obtained by minimizing a convenient error functional, assembled with data from redundant tests to handle the intrinsically ill-conditioned nature of the problem. Numerical simulations performed for increasing error levels demonstrate that acceptable calibration curves can be reconstructed, even from total mass values measured with errors of up to 2%. Consequently, the method can readily be applied, especially to on-site calibration problems in which classical procedures fail due to the impossibility of strictly controlling all the input/output parameters.
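As a rough illustration of the inverse problem described above, the sketch below reconstructs a discretized calibration curve from redundant total-mass measurements using Tikhonov-regularized least squares. The discretization, exposure weights, noise model, and regularization strength are all assumptions for illustration, not the paper's actual formulation.

```python
# Illustrative sketch (not the paper's algorithm): recover a phase-fraction
# calibration curve, sampled at n nodes, from redundant total-mass data by
# Tikhonov-regularized least squares.
import numpy as np

rng = np.random.default_rng(0)

n_nodes = 20
true_curve = np.linspace(0.05, 0.95, n_nodes) ** 1.3     # hypothetical true curve

# Each redundant test exposes the meter to the output nodes with known
# weights; the total dispersed-phase mass is a weighted sum of curve values.
n_tests = 40
A = rng.random((n_tests, n_nodes))                       # assumed-known exposure weights
m = A @ true_curve
m_noisy = m * (1 + 0.02 * rng.standard_normal(n_tests))  # ~2% measurement error

# A second-difference penalty smooths the curve and tames ill-conditioning.
D = np.diff(np.eye(n_nodes), n=2, axis=0)
lam = 1e-1

# Solve min ||A c - m||^2 + lam ||D c||^2 as one stacked least-squares system.
A_aug = np.vstack([A, np.sqrt(lam) * D])
b_aug = np.concatenate([m_noisy, np.zeros(D.shape[0])])
curve_est, *_ = np.linalg.lstsq(A_aug, b_aug, rcond=None)

print("max abs error:", np.abs(curve_est - true_curve).max())
```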
Abstract:
Information gained from the human genome project and improvements in compound synthesis have increased the number of both therapeutic targets and potential lead compounds. This has created a need for better screening techniques with the capacity to screen ever larger compound libraries against an increasing number of targets. Radioactivity-based assays have traditionally been used in drug screening, but fluorescence-based assays have become more popular in high-throughput screening (HTS) as they avoid the safety and waste problems associated with radioactivity. In comparison to conventional fluorescence, more sensitive detection is obtained with time-resolved luminescence, which has increased the popularity of time-resolved fluorescence resonance energy transfer (TR-FRET) based assays. To simplify the current TR-FRET-based assay concept, a luminometric, homogeneous, single-label assay technique, Quenching Resonance Energy Transfer (QRET), was developed. The technique utilizes a soluble quencher to non-specifically quench the signal of the unbound fraction of the lanthanide-labeled ligand. A single labeling procedure and fewer manipulation steps in the assay concept save resources. The QRET technique is suitable for both biochemical and cell-based assays, as indicated in four studies: 1) a ligand screening study of the β2-adrenergic receptor (cell-based), 2) an activation study of Gs-/Gi-protein coupled receptors by measuring the intracellular concentration of cyclic adenosine monophosphate (cell-based), 3) an activation study of G-protein coupled receptors by observing the binding of guanosine-5’-triphosphate (cell membranes), and 4) an activation study of the small GTP-binding protein Ras (biochemical). Signal-to-background ratios were between 2.4 and 10, and coefficients of variation varied from 0.5% to 17%, indicating the assays' suitability for HTS use.
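A minimal sketch of the two figures of merit quoted above, computed from hypothetical plate-reader counts; the well values are invented, while the formulas are the standard signal-to-background and coefficient-of-variation definitions.

```python
# Hedged sketch: S/B and CV from invented plate-reader counts.
import numpy as np

signal = np.array([5200, 5050, 5400, 5150])      # specific (bound) wells
background = np.array([510, 495, 530, 505])      # quenched (unbound) wells

sb_ratio = signal.mean() / background.mean()               # signal-to-background
cv_percent = 100 * signal.std(ddof=1) / signal.mean()      # coefficient of variation

print(f"S/B = {sb_ratio:.1f}, CV = {cv_percent:.1f}%")
```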
Abstract:
The objective of this study was to optimize and validate the solid-liquid extraction (ESL) technique for the determination of picloram residues in soil samples. At the optimization stage, the optimal conditions for extraction of soil samples were determined using univariate analysis. The soil/extraction-solution ratio, the type and time of agitation, and the ionic strength and pH of the extraction solution were evaluated. Based on the optimized parameters, the following method of extraction and analysis of picloram was developed: weigh 2.00 g of soil, dried and sieved through a 2.0 mm mesh; add 20.0 mL of 0.5 mol L-1 KCl; vortex the bottle for 10 seconds to form a suspension and adjust to pH 7.00 with 0.1 mol L-1 KOH. Homogenize the system in a shaker for 60 minutes and then let it stand for 10 minutes. The bottles are centrifuged for 10 minutes at 3,500 rpm. After the soil particles have settled and the supernatant extract has been cleaned, an aliquot is withdrawn and analyzed by high-performance liquid chromatography. The optimized method was validated by determining selectivity, linearity, detection and quantification limits, precision, and accuracy. The ESL methodology was efficient for the analysis of residues of the pesticide studied, with recoveries above 90%. The limits of detection and quantification were 20.0 and 66.0 mg kg-1 soil for the PVA soil, and 40.0 and 132.0 mg kg-1 soil for the VLA soil. The coefficients of variation (CV) were 2.32 and 2.69 for the PVA and TH soils, respectively. The methodology resulted in low organic solvent consumption and cleaner extracts, and no purification steps were required before chromatographic analysis. The parameters evaluated in the validation process indicated that the ESL methodology is efficient for the extraction of picloram residues in soils, with low limits of detection and quantification.
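For reference, detection and quantification limits in a validation like this are commonly estimated from a calibration line as LOD = 3.3 s/S and LOQ = 10 s/S (s: residual standard deviation, S: slope), following ICH-style guidance; the sketch below uses invented data and is not necessarily the procedure used in the study.

```python
# Hedged sketch: ICH-style LOD/LOQ estimates from an invented calibration line.
import numpy as np

conc = np.array([0.0, 0.5, 1.0, 2.0, 4.0, 8.0])            # spiked picloram levels
area = np.array([2.1, 55.0, 108.9, 221.5, 450.2, 882.0])    # HPLC peak areas

slope, intercept = np.polyfit(conc, area, 1)
residuals = area - (slope * conc + intercept)
s = residuals.std(ddof=2)                    # residual standard deviation

lod = 3.3 * s / slope
loq = 10.0 * s / slope
print(f"LOD = {lod:.3f}, LOQ = {loq:.3f} (same units as conc)")
```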
Abstract:
The aim of this thesis was to identify the best grease removal technique based on applying low-power UV light to TiO2-coated grease filters. Treatments of normal grease filters and TiO2-coated grease filters with ozone-generating and ozone-free lamps at various power levels were examined, and the results are compared with each other in this paper. The effect of the ozone reaction was observed and compared with the effect of TiO2. The experiments were based solely on photo-oxidation and photocatalytic oxidation reactions. TiO2 is a green catalyst used in the photocatalytic reaction. Sunflower oil was used for grease production and tetrachloroethylene as a solvent. Grease samples were collected from the ventilation duct connected to the cooking hood system. Sample extraction was done in an ultrasonic bath by sonication. Sample analysis was done with an FTIR spectrometer. The grease concentration was determined by quantifying the saturated C-H bonds in the chosen peak group of the spectrum. Very low-power UVC light functions well together with titanium dioxide. The experimental results have shown that the combined treatment of titanium dioxide and UV light is an effective method for grease removal. The photocatalytic reaction with titanium dioxide performs better than the photo-oxidation reaction with ozone treatment. The photocatalytic reaction is environmentally friendly, energy-efficient, and economical.
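A hedged sketch of the FTIR quantification step described above: integrating the saturated C-H stretch band (roughly 2800-3000 cm-1) above a linear baseline yields a value proportional to the grease amount. The spectrum below is synthetic, and the band limits and baseline choice are assumptions.

```python
# Hedged sketch: quantify grease via the saturated C-H stretch band area.
import numpy as np

wavenumber = np.linspace(4000, 400, 1800)                  # cm-1, descending as in FTIR export
absorbance = 0.02 + 0.5 * np.exp(-((wavenumber - 2925) / 30) ** 2)  # synthetic C-H band

mask = (wavenumber >= 2800) & (wavenumber <= 3000)         # saturated C-H region
x = wavenumber[mask][::-1]                                 # ascending for integration
y = absorbance[mask][::-1]
baseline = np.linspace(y[0], y[-1], y.size)                # simple linear baseline

net = y - baseline
peak_area = np.sum(0.5 * (net[1:] + net[:-1]) * np.diff(x))  # trapezoid rule

print(f"C-H band area = {peak_area:.2f} (proportional to grease amount)")
```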
Abstract:
The Swedish public health care organisation could very well be undergoing its most significant change since its specialisation during the late 19th and early 20th century. At the heart of this change is a move from manual patient journals to electronic health records (EHR). EHR are complex, integrated, organisation-wide information systems (IS) that promise great benefits and value while also presenting great challenges to the organisation. Swedish public health care is not the first organisation to implement integrated IS, and by no means alone in its quest to realise the potential benefits and value that such systems have to offer. As organisations invest in IS they embark on a journey of value creation and capture. A journey where a cost-based approach towards their IS investments is replaced with a value-centric focus, and where the main challenges lie in the practical day-to-day task of finding ways to intertwine technology, people and business processes. This has, however, proven to be a problematic task. The problematic situation arises from a shift of perspective regarding how to manage IS in order to gain value. This is a shift from technology delivery to benefits delivery; from an IS-implementation plan to a change management plan. The shift gives rise to challenges related to the inability of IS and the elusiveness of value. As a response to these challenges the field of IS benefits management has emerged, offering a framework and a process to better understand and formalise benefits realisation activities. In this thesis the benefits realisation efforts of three Swedish hospitals within the same county council are studied. The thesis focuses on the participants of benefits analysis projects; their perceptions, judgments, negotiations and descriptions of potential benefits. The purpose is to address the process where organisations seek to identify which potential IS benefits to pursue and realise, in order to better understand what affects the process, so that realisation actions of potential IS benefits can be supported. A qualitative case study research design is adopted and provides a framework for sample selection, data collection, and data analysis. It also provides a framework for discussions of validity, reliability and generalizability. Findings displayed a benefits fluctuation, which showed that participants’ perception of what constituted potential benefits and value changed throughout the formal benefits management process. Issues like structure, knowledge, expectation and experience affected perception differently, and this in the end changed the amount and composition of potential benefits and value. Five dimensions of benefits judgment were identified and used by participants when finding accommodations of potential benefits and value to pursue. The identified dimensions affected participants’ perceptions, which in turn affected the amount and composition of potential benefits. During the formal benefits management process participants shifted between judgment dimensions. These movements emerged through debates and interactions between participants. Judgments based on what was perceived as expected due to one’s role, and on what was perceived as best for the organisation as a whole, were the two dominant benefits judgment dimensions. A benefits negotiation was identified. Negotiations were divided into two main categories, rational and irrational, depending on participants’ drive when initiating and participating in negotiations.
In each category, three different types of negotiations were identified, each with different characteristics and generating different outcomes. A benefits negotiation process was also identified, displaying management challenges corresponding to its five phases. A discrepancy was also found between how IS benefits are spoken of and how actions of IS benefits realisation are understood. This was a discrepancy between an evaluation focus and a realisation focus towards IS value creation. An evaluation focus described IS benefits as well-defined and measurable effects, while a realisation focus spoke of establishing and managing an on-going place of value creation. The notion of valuescape was introduced in order to describe and support the understanding of IS value creation. Valuescape corresponded to a realisation focus and outlined a value configuration consisting of activities, logic, structure, drivers and the role of IS.
Abstract:
Presentation at Open Repositories 2014, Helsinki, Finland, June 9-13, 2014
Abstract:
BCM (Business Continuity Management) is a holistic management process aiming at ensuring business continuity and building organizational resilience. Maturity models offer organizations a tool for evaluating their current maturity in a certain process. In recent years BCM has been the subject of international ISO standardization, while organizations' interest in benchmarking their state of BCM against standards, and their use of maturity models for these assessments, has increased. However, although new standards have been introduced, very little attention has been paid to reviewing the existing BCM maturity models in research - especially in the light of the new ISO 22301 standard for BCM. In this thesis the existing BCM maturity models are carefully evaluated to determine whether they could be improved. In order to accomplish this, the compliance of the existing models with the ISO 22301 standard is measured and a framework for assessing a maturity model's quality is defined. After carefully evaluating the existing frameworks for maturity model development and evaluation, an approach suggested by Becker et al. (2009) was chosen as the basis for the research. In addition to the procedural model, a set of seven research guidelines proposed by the same authors was applied, drawing on the design-science research guidelines suggested by Hevner et al. (2004). Furthermore, the existing models' form and function were evaluated to address their usability. Based on the evaluation of the existing BCM maturity models, the existing models were found to have shortcomings in each dimension of the evaluation. Utilizing the best of the existing models, a draft version of an enhanced model was developed. This draft model was then iteratively developed by conducting six semi-structured interviews with BCM professionals in Finland, with the aim of validating and improving it. As a result, a final version of the enhanced BCM maturity model was developed, conforming to the seven key clauses of the ISO 22301 standard and the maturity model development guidelines suggested by Becker et al. (2009).
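As a hedged illustration of what a clause-level maturity assessment can look like, the sketch below scores the seven key clauses of ISO 22301 (clauses 4-10 of the ISO high-level structure for management system standards) on a five-point scale and aggregates them; the scale and the example scores are invented and are not the thesis's actual model.

```python
# Hedged sketch: toy clause-level BCM maturity scoring (invented scale/scores).
from statistics import mean

clauses = {
    "4 Context of the organization": 3,
    "5 Leadership": 4,
    "6 Planning": 2,
    "7 Support": 3,
    "8 Operation": 2,
    "9 Performance evaluation": 1,
    "10 Improvement": 2,
}

overall = mean(clauses.values())
weakest = min(clauses, key=clauses.get)
print(f"Overall BCM maturity: {overall:.1f}/5; weakest area: {weakest}")
```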
Abstract:
The objective of this study was to identify restriction fragment length polymorphism (RFLP) markers linked to QTLs that control aluminum (Al) tolerance in maize. The strategy used was bulked segregant analysis (BSA), and the genetic material utilized was an F2 population derived from a cross between the Al-susceptible inbred line L53 and the Al-tolerant inbred line L1327. Both lines were developed at the National Maize and Sorghum Research Center - CNPMS/EMBRAPA. The F2 population of 1554 individuals was evaluated in a nutrient solution containing a toxic concentration of Al, and relative seminal root length (RSRL) was used as a phenotypic measure of tolerance. The RSRL frequency distribution was continuous, but skewed towards Al-susceptible individuals. Seedlings of the F2 population which scored the highest and the lowest RSRL values were transplanted to the field and subsequently selfed to obtain F3 families. Thirty F3 families (15 Al-susceptible and 15 Al-tolerant) were evaluated in nutrient solution, using an incomplete block design, to identify those with the smallest variances for aluminum tolerance and susceptibility. Six Al-susceptible and five Al-tolerant F3 families were chosen, based on average RSRL values and genetic variance, to construct one pool of Al-susceptible individuals and another of Al-tolerant individuals, herein referred to as "bulks". One hundred and thirteen probes were selected, with an average interval of 30 cM, covering the 10 maize chromosomes. These were tested for their ability to discriminate the parental lines. Fifty-four of these probes were polymorphic, with 46 showing codominance. These probes were hybridized with DNA from the two contrasting bulks. Three RFLPs on chromosome 8 distinguished the bulks on the basis of band intensity. DNA of individuals from the bulks was hybridized with these probes and showed the presence of heterozygous individuals in each bulk. These results suggest that in maize there is a region related to aluminum tolerance on chromosome 8.
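An illustrative sketch of the bulk-construction logic described above: rank individuals by RSRL and take the phenotypic extremes as candidate bulks. The data, the simplified treatment of RSRL as a ready-made score, and the bulk sizes are stand-ins for illustration, not the study's actual pipeline.

```python
# Hedged sketch: selecting phenotypic extremes for BSA bulks (invented data).
import numpy as np

rng = np.random.default_rng(1)
n_plants = 1554
rsrl = rng.beta(2, 5, n_plants)     # toy RSRL scores, skewed toward susceptible

order = np.argsort(rsrl)
susceptible_bulk = order[:15]       # lowest RSRL -> Al-susceptible candidates
tolerant_bulk = order[-15:]         # highest RSRL -> Al-tolerant candidates

print("susceptible RSRL range:",
      rsrl[susceptible_bulk].min(), rsrl[susceptible_bulk].max())
print("tolerant RSRL range:",
      rsrl[tolerant_bulk].min(), rsrl[tolerant_bulk].max())
```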
Abstract:
The dissertation proposes two control strategies, covering trajectory planning and vibration suppression, for a kinematically redundant serial-parallel robot machine, with the aim of attaining satisfactory machining performance. For a given prescribed trajectory of the robot's end-effector in Cartesian space, a set of trajectories in the robot's joint space is generated based on the best stiffness performance of the robot along the prescribed trajectory. To construct the required system-wide analytical stiffness model for the serial-parallel robot machine, a variant of the virtual joint method (VJM) is proposed in the dissertation. The modified method is an evolution of Gosselin's lumped model that can account for the deformations of a flexible link in more directions. The effectiveness of this VJM variant is validated by comparing the computed stiffness results of a flexible link with those of a matrix structural analysis (MSA) method. The comparison shows that the numerical results from both methods on an individual flexible beam are almost identical, which, in some sense, provides mutual validation. The most prominent advantage of the presented VJM variant over the MSA method is that it can be applied to a flexible structure system with complicated kinematics formed of flexible serial links and joints. Moreover, by combining the VJM variant with the virtual work principle, a system-wide analytical stiffness model can easily be obtained for mechanisms with both serial and parallel kinematics. In the dissertation, a system-wide stiffness model of a kinematically redundant serial-parallel robot machine is constructed by integrating the VJM variant and the virtual work principle. Numerical results of its stiffness performance are reported. For a kinematically redundant robot, to generate a set of feasible joint trajectories for a prescribed trajectory of its end-effector, the system-wide stiffness performance is taken as the constraint in the joint trajectory planning. For a prescribed location of the end-effector, the robot permits an infinite number of inverse solutions, which consequently yield an infinite variety of stiffness performance. Therefore, a differential evolution (DE) algorithm, in which the positions of the redundant joints in the kinematics are taken as input variables, was employed to search for the best stiffness performance of the robot. Numerical results of the generated joint trajectories are given for a kinematically redundant serial-parallel robot machine, the IWR (Intersector Welding/Cutting Robot), for a particular prescribed trajectory of its end-effector. The numerical results show that the joint trajectories generated based on the stiffness optimization are feasible for realization in the control system, since they are acceptably smooth. The results imply that the stiffness performance of the robot machine varies smoothly with respect to the kinematic configuration in the neighbourhood of its best stiffness performance. To suppress the vibration of the robot machine due to the varying cutting force during the machining process, the dissertation proposes a feedforward control strategy, constructed from the derived inverse dynamics model of the target system. The effectiveness of applying such feedforward control to vibration suppression has been validated on a parallel manipulator in a software environment.
The experimental study of this feedforward control is also included in the dissertation. The difficulty of modelling the actual system, due to unknown components in its dynamics, is noted. As a solution, a back propagation (BP) neural network is proposed for identifying the unknown components of the dynamics model of the target system. To train such a BP neural network, a modified Levenberg-Marquardt algorithm that can utilize an experimental input-output data set of the entire dynamic system is introduced in the dissertation. Validation of the BP neural network and the modified Levenberg-Marquardt algorithm is done by sinusoidal output approximation, second-order system parameter estimation, and friction model estimation of a parallel manipulator, respectively, which represent three different application aspects of this method.
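Two hedged sketches of the computational ideas in this abstract follow. The first illustrates the trajectory-planning step: a differential evolution search over the redundant joint positions that maximizes a stiffness objective at each prescribed end-effector pose. The stiffness function is a toy stand-in, not the IWR's actual system-wide stiffness model.

```python
# Hedged sketch: DE search for stiffness-optimal redundant joint placement.
import numpy as np
from scipy.optimize import differential_evolution

def stiffness_objective(q_red, pose):
    """Toy stand-in for a system-wide stiffness index at end-effector `pose`
    for redundant joint values `q_red` (negated: DE minimizes)."""
    return -(np.cos(q_red[0] - pose) + 0.5 * np.cos(2 * q_red[1]))

trajectory = np.linspace(0.0, np.pi / 2, 10)      # prescribed poses (toy 1-D)
joint_plan = []
for pose in trajectory:
    result = differential_evolution(
        stiffness_objective, bounds=[(-np.pi, np.pi)] * 2, args=(pose,), seed=0)
    joint_plan.append(result.x)

joint_plan = np.array(joint_plan)   # column smoothness can then be inspected
print(joint_plan)
```

The second sketch fits a small one-hidden-layer network to input-output data with a generic Levenberg-Marquardt least-squares solver, in the spirit of identifying unknown dynamics components; SciPy's stock LM routine stands in for the modified algorithm introduced in the dissertation.

```python
# Hedged sketch: tiny network fitted by a generic Levenberg-Marquardt solver.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 200)
y = np.sin(3 * x) + 0.05 * rng.standard_normal(x.size)   # "measured" behaviour

n_hidden = 8
def unpack(p):
    w1 = p[:n_hidden]; b1 = p[n_hidden:2 * n_hidden]
    w2 = p[2 * n_hidden:3 * n_hidden]; b2 = p[-1]
    return w1, b1, w2, b2

def residuals(p):
    w1, b1, w2, b2 = unpack(p)
    hidden = np.tanh(np.outer(x, w1) + b1)    # (200, n_hidden)
    return hidden @ w2 + b2 - y

p0 = 0.1 * rng.standard_normal(3 * n_hidden + 1)
fit = least_squares(residuals, p0, method="lm")   # Levenberg-Marquardt
print("RMS error:", np.sqrt(np.mean(fit.fun ** 2)))
```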
Abstract:
Genotyping techniques are valuable tools for the epidemiologic study of Staphylococcus aureus infections in the hospital setting. Pulsed-field gel electrophoresis (PFGE) is the current method of choice for S. aureus strain typing. However, the method is laborious and requires expensive equipment. In the present study, we evaluated the natural polymorphism of the genomic 16S-23S rRNA region for genotyping purposes, by PCR-based ribotyping. Three primer pairs were tested to determine the size of the amplicons produced and to obtain better discrimination with agarose gel electrophoresis and ethidium bromide staining. The resolution of the typing system was determined using sets of bacteria obtained from clinical specimens from a large tertiary care hospital. These included DNA from three samples obtained from a bacteremic patient, six strains with known and diverse PFGE patterns, and 88 strains collected over a 3-month period in the same hospital. Amplification patterns obtained from samples from the same patient were identical, while the samples known to be different by PFGE produced three genotypes. Amplification of DNA from 61 methicillin-resistant isolates produced only one pattern. Methicillin-sensitive strains yielded a diversity of patterns, pointing to a true polyclonal distribution throughout the hospital (22 unique patterns from 27 strains). Computer-based software can be used to differentiate among identifiable strains, given the low number of bands and the good characterization of PCR products. PCR-based ribotyping can be a useful technique for genotyping methicillin-sensitive S. aureus strains, but is of limited value for methicillin-resistant strains.
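A hedged sketch of how gel band patterns can be compared numerically for typing purposes: the Dice coefficient between two band lists, with a fractional size tolerance for matching. The band sizes are invented, and this is a generic pattern-comparison technique rather than the exact calculation used in the study.

```python
# Hedged sketch: Dice similarity between two gel band patterns (invented sizes).
def dice_similarity(bands_a, bands_b, tol=0.05):
    """2*matches/(len_a+len_b); two bands match when their sizes differ
    by less than `tol` as a fraction of the larger size."""
    unmatched_b = list(bands_b)
    matches = 0
    for a in bands_a:
        for b in unmatched_b:
            if abs(a - b) / max(a, b) < tol:
                matches += 1
                unmatched_b.remove(b)
                break
    return 2 * matches / (len(bands_a) + len(bands_b))

pattern_1 = [620, 540, 480, 310]   # strain 1 amplicon sizes, bp
pattern_2 = [625, 545, 300]        # strain 2
print(f"Dice similarity: {dice_similarity(pattern_1, pattern_2):.2f}")
```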
Abstract:
This thesis considers optimization problems arising in printed circuit board assembly. In particular, the case in which the electronic components of a single circuit board are placed using a single placement machine is studied. Although there is a large number of different placement machines, collect-and-place-type gantry machines are discussed because of their flexibility and increasing popularity in the industry. Instead of solving the entire control optimization problem of a collect-and-place machine with a single application, the problem is divided into multiple subproblems because of its hard combinatorial nature. This dividing technique is called hierarchical decomposition. All the subproblems of the one PCB - one machine context are described, classified and reviewed. The derived subproblems are then either solved with exact methods or new heuristic algorithms are developed and applied. The exact methods include, for example, a greedy algorithm and a solution based on dynamic programming. Some of the proposed heuristics contain constructive parts while others utilize local search or are based on frequency calculations. Comprehensive experimental tests verify that the heuristics are applicable and feasible. A number of quality functions are proposed for evaluation and applied to the subproblems. In the experimental tests, artificially generated data from Markov models and data from real-world PCB production are used. The thesis consists of an introduction and five publications where the developed and used solution methods are described in full detail. For all the problems stated in this thesis, the methods proposed are efficient enough to be used in PCB assembly production in practice and are readily applicable in the PCB manufacturing industry.
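As an illustration of the kind of subproblem heuristic discussed above, the sketch below applies a greedy nearest-neighbour ordering to component placement points, a common baseline for the placement-sequencing subproblem; it is not necessarily one of the thesis's own algorithms, and the coordinates are invented.

```python
# Hedged sketch: greedy nearest-neighbour placement sequencing (invented data).
import math

points = [(10, 5), (3, 7), (8, 1), (2, 2), (9, 9)]   # placement locations (mm)

def greedy_sequence(points, start=(0.0, 0.0)):
    """Visit every placement point, always moving to the nearest unvisited one."""
    remaining = list(points)
    order, pos = [], start
    while remaining:
        nxt = min(remaining, key=lambda p: math.dist(pos, p))
        remaining.remove(nxt)
        order.append(nxt)
        pos = nxt
    return order

print(greedy_sequence(points))
```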
Abstract:
Coronary artery disease (CAD) is a worldwide leading cause of death. The standard method for evaluating critical partial occlusions is coronary arteriography, a catheterization technique which is invasive, time-consuming, and costly. There are noninvasive approaches for the early detection of CAD. The basis for the noninvasive diagnosis of CAD has been laid in a sequential analysis of the risk factors and the results of the treadmill test and myocardial perfusion scintigraphy (MPS). Many investigators have demonstrated that the diagnostic applications of MPS are appropriate for patients who have an intermediate likelihood of disease. Although this information is useful, it is only partially utilized in clinical practice due to the difficulty of properly classifying patients. Since the seminal work of Lotfi Zadeh, fuzzy logic has been applied in numerous areas. In the present study, we proposed and tested a model for selecting patients for MPS based on fuzzy set theory. A group of 1053 patients was used to develop the model and another group of 1045 patients was used to test it. Receiver operating characteristic curves were used to compare the performance of the fuzzy model against expert physician opinions, and showed that the performance of the fuzzy model was equal or superior to that of the physicians. Therefore, we conclude that the fuzzy model could be a useful tool to assist the general practitioner in the selection of patients for MPS.
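A hedged sketch of the fuzzy-set machinery such a model rests on: triangular membership functions partition the pretest likelihood of CAD into low, intermediate, and high sets, with MPS suggested when the intermediate membership dominates. The breakpoints below are illustrative assumptions, not the study's fitted values.

```python
# Hedged sketch: triangular fuzzy sets over pretest CAD likelihood.
def triangular(x, a, b, c):
    """Triangular membership: rises from a to peak b, falls back to zero at c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def memberships(likelihood):
    return {
        "low": triangular(likelihood, -0.01, 0.0, 0.30),
        "intermediate": triangular(likelihood, 0.10, 0.50, 0.90),
        "high": triangular(likelihood, 0.70, 1.0, 1.01),
    }

m = memberships(0.45)   # patient with a 45% pretest likelihood
print(m, "-> refer to MPS" if max(m, key=m.get) == "intermediate" else "")
```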
Abstract:
The objective of this thesis is to develop and further generalize a differential evolution based data classification method. For many years, evolutionary algorithms have been successfully applied to classification tasks. Evolutionary algorithms are population-based, stochastic search algorithms that mimic natural selection and genetics. Differential evolution is an evolutionary algorithm that has gained popularity because of its simplicity and good observed performance. In this thesis a differential evolution classifier with a pool of distances is proposed, demonstrated and initially evaluated. The differential evolution classifier is a nearest-prototype-vector based classifier that applies a global optimization algorithm, differential evolution, to determine the optimal values for all free parameters of the classifier model during the training phase. The differential evolution classifier, which applies an individually optimized distance measure for each new data set to be classified, is generalized to cover a pool of distances. Instead of optimizing a single distance measure for the given data set, the selection of the optimal distance measure from a predefined pool of alternative measures is attempted systematically and automatically. Furthermore, instead of only selecting the optimal distance measure from a set of alternatives, an attempt is made to optimize the values of the possible control parameters related to the selected distance measure. Specifically, a pool of alternative distance measures is first created and then the differential evolution algorithm is applied to select the optimal distance measure that yields the highest classification accuracy with the current data. After determining the optimal distance measures for the given data set, together with their optimal parameters, all determined distance measures are aggregated to form a single total distance measure. The total distance measure is applied to the final classification decisions. The actual classification process is still based on the nearest-prototype-vector principle; a sample belongs to the class represented by the nearest prototype vector when measured with the optimized total distance measure. During the training process the differential evolution algorithm determines the optimal class vectors, selects the optimal distance metrics, and determines the optimal values for the free parameters of each selected distance measure. The results obtained with the above method confirm that the choice of distance measure is one of the most crucial factors for obtaining higher classification accuracy. The results also demonstrate that it is possible to build a classifier that is able to select the optimal distance measure for the given data set automatically and systematically. After the optimal distance measures, together with their optimal parameters, have been found for the particular data set, they are aggregated to form a total distance, which is used to measure the deviation between the class vectors and the samples and thus to classify the samples. This thesis also discusses two types of aggregation operators, namely ordered weighted averaging (OWA) based multi-distances and generalized ordered weighted averaging (GOWA). These aggregation operators were applied in this work to the aggregation of the normalized distance values. The results demonstrate that a proper combination of aggregation operator and weight generation scheme plays an important role in obtaining good classification accuracy.
The main outcomes of the work are six new generalized versions of the previously proposed differential evolution classifier. All of these DE classifiers demonstrated good results in the classification tasks.
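A hedged sketch of the core training idea: a nearest-prototype classifier whose prototype vectors and per-feature distance weights are optimized jointly by differential evolution on training accuracy. A toy two-class, two-feature data set stands in for real data, and the pool-of-distances and OWA/GOWA aggregation layers are omitted for brevity.

```python
# Hedged sketch: DE-trained nearest-prototype classifier (toy data).
import numpy as np
from scipy.optimize import differential_evolution

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(3, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

def predict(params, X):
    protos = params[:4].reshape(2, 2)          # one prototype vector per class
    w = np.abs(params[4:6])                    # per-feature distance weights
    d = np.sqrt(((X[:, None, :] - protos) ** 2 * w).sum(-1))  # weighted L2
    return d.argmin(axis=1)                    # nearest prototype wins

def neg_accuracy(params):
    return -np.mean(predict(params, X) == y)   # DE minimizes, so negate

result = differential_evolution(neg_accuracy, bounds=[(-5, 8)] * 6, seed=0)
print("training accuracy:", -result.fun)
```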