947 results for Lead based paint
Abstract:
Many companies today struggle with problems around sales lead management: inconsistent lead quality, missed sales opportunities, and poorly managed internal marketing lists. Meanwhile, customers are increasingly well equipped to initiate contact themselves, via the internet, call centers and other channels. Investing in lead generation activities built on a bad process is not a good idea; rather than asking how to get more leads, companies should ask how to get better-quality leads and invest in improving lead management. This study looks at sales lead management as a multi-step process in which a company generates leads in a controlled environment, qualifies them and hands them over to the sales cycle. As a final step, the organization needs to analyze the outcomes and successes of the different lead sources. Most often, improving a sales lead management process requires setting up additional controls to enable proper tracking of all leads. A sales lead management process model for the case company is built based on the findings. Implementing the new model involves changes and improvements in some key areas of the current process. Starting from the very beginning, these include refining the lead definition and revising the criteria for a qualified lead. Some improvements are also needed on the system side to enable the proposed model. Lastly, an assignment of responsible roles is presented.
Abstract:
DNA is nowadays swabbed routinely to investigate serious and volume crimes, but research remains scarce when it comes to determining the criteria that may impact the success rate of DNA swabs taken on different surfaces and in different situations. To investigate these criteria under fully operational conditions, DNA analysis results of 4772 swabs taken by the forensic unit of a police department in Western Switzerland over a 2.5-year period (2012-2014) in volume crime cases were considered. A representative and random sample of 1236 swab analyses was extensively examined and codified, describing several criteria such as whether the swabbing was performed at the scene or in the lab, the zone of the scene where it was performed, the kind of object or surface that was swabbed, whether the target specimen was a touch surface or a biological fluid, and whether the swab targeted a single surface or combined different surfaces. The impact of each criterion and of their combination was assessed with regard to the success rate of DNA analysis, measured through the quality of the resulting profile and whether the profile resulted in a hit in the national database or not. Results show that some situations, such as swabs taken on door and window handles, have a higher success rate than the average swab. Conversely, other situations lead to a marked decrease in the success rate, which should discourage further analyses of such swabs. Results also confirm that targeting a DNA swab on a single surface is preferable to swabbing different surfaces with the intent to aggregate cells deposited by the offender. Such results assist in predicting the chance that the analysis of a swab taken in a given situation will lead to a positive result. The study could therefore inform an evidence-based approach to decision-making at the crime scene (what to swab or not) and at the triage step (what to analyse or not), thus contributing to saving resources and increasing the efficiency of forensic science efforts.
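The kind of per-criterion success-rate breakdown described above can be reproduced on any coded swab dataset. The following minimal sketch is not the study's actual analysis: the column names and the handful of records are invented purely to show the idea with pandas.

# Minimal sketch (not the study's code): per-situation success rates from coded swab
# records. Column names ("surface", "hit") and values are hypothetical.
import pandas as pd

records = pd.DataFrame({
    "surface": ["door_handle", "window_handle", "steering_wheel", "door_handle"],
    "hit":     [1, 1, 0, 0],   # 1 = resulting profile hit the national database
})

# Success rate per swabbed surface, with counts to judge how reliable each rate is
summary = records.groupby("surface")["hit"].agg(["mean", "count"])
print(summary.rename(columns={"mean": "success_rate", "count": "n_swabs"}))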
Abstract:
Due to the existence of free software and pedagogical guides, the use of Data Envelopment Analysis (DEA) has been further democratized in recent years. Nowadays, it is quite usual for practitioners and decision makers with little or no knowledge of operational research to run their own efficiency analyses. Within DEA, several alternative models allow for an environmental adjustment. Four alternative models, each user-friendly and easily accessible to practitioners and decision makers, are applied to empirical data from 90 primary schools in the State of Geneva, Switzerland. Results show that the majority of the alternative models deliver divergent results. From a political and a managerial standpoint, these diverging results could lead to potentially ineffective decisions. As no consensus emerges on the best model to use, practitioners and decision makers may be tempted to select the model that is right for them, in other words the model that best reflects their own preferences. Further studies should investigate how an appropriate multi-criteria decision analysis method could help decision makers select the right model.
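For readers unfamiliar with DEA, the sketch below shows the basic mechanics of an input-oriented CCR efficiency score computed as a small linear program. It is an illustration only, not one of the four environmentally adjusted models used in the study, and the school input/output figures are invented.

# Input-oriented CCR DEA, envelopment form: minimise theta subject to
#   sum_j lambda_j * x_ij <= theta * x_io   (inputs)
#   sum_j lambda_j * y_rj >= y_ro           (outputs), lambda_j >= 0
import numpy as np
from scipy.optimize import linprog

X = np.array([[20, 30, 25, 40],      # input 1 (e.g. teaching hours) per school (DMU)
              [5,  8,  6,  9]])      # input 2 (e.g. budget) per school
Y = np.array([[100, 120, 110, 150]]) # output (e.g. attainment score) per school

def ccr_efficiency(o, X, Y):
    """Efficiency of DMU o (0 < theta <= 1). Decision variables: [theta, lambda_1..n]."""
    m, n = X.shape
    s = Y.shape[0]
    c = np.zeros(n + 1)
    c[0] = 1.0                                   # minimise theta
    A_in = np.hstack([-X[:, [o]], X])            # inputs:  lambda.x - theta*x_o <= 0
    b_in = np.zeros(m)
    A_out = np.hstack([np.zeros((s, 1)), -Y])    # outputs: -lambda.y <= -y_o
    b_out = -Y[:, o]
    res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.concatenate([b_in, b_out]),
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.fun

for o in range(X.shape[1]):
    print(f"DMU {o}: efficiency = {ccr_efficiency(o, X, Y):.3f}")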
Abstract:
BACKGROUND: High interindividual variability in plasma concentrations of risperidone and its active metabolite, 9-hydroxyrisperidone, may lead to suboptimal drug concentrations. OBJECTIVE: Using a population pharmacokinetic approach, we aimed to characterize the genetic and non-genetic sources of variability affecting risperidone and 9-hydroxyrisperidone pharmacokinetics, and to relate them to common side effects. METHODS: Overall, 150 psychiatric patients (178 observations) treated with risperidone were genotyped for common polymorphisms in the NR1/2, POR, PPARα, ABCB1, CYP2D6 and CYP3A genes. Plasma risperidone and 9-hydroxyrisperidone were measured, and clinical data and common clinical chemistry parameters were collected. Drug and metabolite concentrations were analyzed using non-linear mixed effect modeling (NONMEM(®)). Correlations between trough concentrations of the active moiety (risperidone plus 9-hydroxyrisperidone) and common side effects were assessed using logistic regression and linear mixed modeling. RESULTS: The cytochrome P450 (CYP) 2D6 phenotype explained 52 % of the interindividual variability in risperidone pharmacokinetics. The area under the concentration-time curve (AUC) of the active moiety was found to be 28 % higher in CYP2D6 poor metabolizers compared with intermediate, extensive and ultrarapid metabolizers. No other genetic markers were found to significantly affect risperidone concentrations. 9-Hydroxyrisperidone elimination decreased by 26 % with a doubling of age. A correlation between the predicted trough concentration of the active moiety and neurologic symptoms was found (p = 0.03), suggesting that a concentration >40 ng/mL should be targeted only in cases of insufficient or absent response. CONCLUSIONS: Genetic polymorphisms of CYP2D6 play an important role in the variability of risperidone, 9-hydroxyrisperidone and active moiety plasma concentrations, which was associated with common side effects. These results highlight the importance of personalized dosage adjustment during risperidone treatment.
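The study assessed the concentration/side-effect relationship with logistic regression; the sketch below illustrates that kind of analysis on invented data. It is not the study's NONMEM model or dataset, and the concentrations and outcomes are placeholders.

# Hedged sketch: logistic regression of a binary side-effect outcome against trough
# concentration of the active moiety. All values below are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

trough_ng_ml = np.array([[12.], [25.], [38.], [44.], [51.], [60.], [18.], [70.]])
neuro_symptom = np.array([0, 0, 0, 1, 1, 1, 0, 1])   # 1 = neurologic symptom reported

model = LogisticRegression().fit(trough_ng_ml, neuro_symptom)
# Predicted probability of a neurologic side effect around the 40 ng/mL threshold
p40 = model.predict_proba([[40.]])[0, 1]
print(f"P(side effect | 40 ng/mL) ~ {p40:.2f}")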
Abstract:
Occupational hygiene practitioners typically assess the risk posed by occupational exposure by comparing exposure measurements to regulatory occupational exposure limits (OELs). In most jurisdictions, OELs are only available for exposure by the inhalation pathway. Skin notations are used to indicate substances for which dermal exposure may lead to health effects. However, these notations are either present or absent and provide no indication of acceptable levels of exposure. Furthermore, the methodology and framework for assigning skin notations differ widely across jurisdictions, resulting in inconsistencies in the substances that carry notations. The UPERCUT tool was developed in response to these limitations. It helps occupational health stakeholders to assess the hazard associated with dermal exposure to chemicals. UPERCUT integrates dermal quantitative structure-activity relationships (QSARs) and toxicological data to provide users with a skin hazard index called the dermal hazard ratio (DHR) for the substance and scenario of interest. The DHR is the ratio between the estimated 'received' dose and the 'acceptable' dose. The 'received' dose is estimated using physico-chemical data and information on the exposure scenario provided by the user (body parts exposed and exposure duration), and the 'acceptable' dose is estimated using inhalation OELs and toxicological data. The uncertainty surrounding the DHR is estimated with Monte Carlo simulation. Additional information on the selected substances includes the substance's intrinsic skin permeation potential and the existence of skin notations. UPERCUT is the only available tool that estimates the absorbed dose and compares it to an acceptable dose. In the absence of dermal OELs, it provides a systematic and simple approach for screening dermal exposure scenarios for 1686 substances.
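The DHR idea (received dose over acceptable dose, with Monte Carlo uncertainty propagation) can be illustrated with a short sketch. This is not UPERCUT's implementation: the dose model and every parameter value below are assumptions chosen only to show the mechanics.

# Illustrative dermal hazard ratio (DHR) with Monte Carlo uncertainty propagation.
# All distributions and central values are assumed, not taken from UPERCUT.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Received dose (mg): permeation flux x exposed skin area x exposure duration,
# each with lognormal uncertainty around assumed central values.
flux = rng.lognormal(mean=np.log(0.01), sigma=0.3, size=n)   # mg/cm^2/h (assumed)
area = rng.lognormal(mean=np.log(840), sigma=0.1, size=n)    # cm^2, roughly two hands
hours = rng.lognormal(mean=np.log(2.0), sigma=0.2, size=n)   # exposure duration, h
received = flux * area * hours

# Acceptable dose (mg): derived here from an assumed inhalation-OEL-equivalent value.
acceptable = rng.lognormal(mean=np.log(25.0), sigma=0.2, size=n)

dhr = received / acceptable
print(f"median DHR = {np.median(dhr):.2f}, 95th percentile = {np.percentile(dhr, 95):.2f}")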
Abstract:
OBJECTIVE: The aim of this study is to review highly cited articles that focus on non-publication of studies, and to develop a consistent and comprehensive approach to defining (non-)dissemination of research findings. SETTING: We performed a scoping review of definitions of the term 'publication bias' in highly cited publications. PARTICIPANTS: Ideas and experiences of a core group of authors were collected in a draft document, which was complemented by the findings from our literature search. INTERVENTIONS: The draft document, including findings from the literature search, was circulated to an international group of experts and revised until no additional ideas emerged and consensus was reached. PRIMARY OUTCOMES: We propose a new approach to the comprehensive conceptualisation of (non-)dissemination of research. SECONDARY OUTCOMES: Our 'What, Who and Why?' approach includes the issues that need to be considered when disseminating research findings (What?), the different players who should assume responsibility during the various stages of conducting a clinical trial and disseminating clinical trial documents (Who?), and the motivations that might lead the various players to disseminate findings selectively, thereby introducing bias into the dissemination process (Why?). CONCLUSIONS: Our comprehensive framework of (non-)dissemination of research findings, based on the results of a scoping literature search and expert consensus, will facilitate the development of future policies and guidelines regarding the multifaceted issue of selective publication, historically referred to as 'publication bias'.
Abstract:
The signalling function of melanin-based colouration is debated. Sexual selection theory states that ornaments should be costly to produce, maintain, wear or display in order to signal quality honestly to potential mates or competitors. An increasing number of studies support the hypothesis that the degree of melanism covaries with aspects of body condition (e.g. body mass or immunity), which has contributed to changing the initial perception that melanin-based colour ornaments entail no costs. Indeed, the expression of many (but not all) melanin-based colour traits is weakly sensitive to the environment but strongly heritable, suggesting that these colour traits are relatively cheap to produce and maintain, and thus raising the question of how such colour traits could signal quality honestly. Here I review the production, maintenance and wearing/displaying costs that can generate a correlation between melanin-based colouration and body condition, and consider other evolutionary mechanisms that can also lead to covariation between colour and body condition. Because genes controlling melanic traits can affect numerous phenotypic traits, pleiotropy could also explain a linkage between body condition and colouration. Pleiotropy may result in differently coloured individuals signalling different aspects of quality that are maintained by frequency-dependent selection or local adaptation. Colouration may therefore not signal absolute quality to potential mates or competitors (e.g. dark males may not achieve a higher fitness than pale males); otherwise, genetic variation would be rapidly depleted by directional selection. As a consequence, selection on heritable melanin-based colouration may not always be directional, and mate choice may be conditional on environmental conditions (i.e. context-dependent sexual selection). Despite the interest of evolutionary biologists in the adaptive value of melanin-based colouration, its actual role in sexual selection is still poorly understood.
Abstract:
Argentina's National Road 7, which crosses the Andes Cordillera within the Mendoza province to connect Santiago de Chile and Buenos Aires, is particularly affected by natural hazards requiring risk management. As part of a research plan that intends to produce landslide susceptibility maps, this study aimed to detect large slope movements by applying satellite radar interferometric analysis to Envisat data acquired between 2005 and 2010. We were able to identify two large slope deformations in sandstone and clay deposits along the gently sloping shores of the Potrerillos dam reservoir, with cumulative displacements greater than 25 mm over 5 years, directed towards the reservoir. There is also a body of evidence that these large slope deformations are influenced by the seasonal variations of the reservoir level. This study shows that very detailed information, such as surface displacements and, above all, water level variations, can be extracted with spaceborne remote sensing techniques; nevertheless, the limitations of InSAR for the present dataset are discussed here. Such an analysis can then lead to further field investigations to understand more precisely the destabilising processes acting on these slope deformations.
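As a small worked example of the underlying interferometric measurement (not the study's processing chain), the line-of-sight displacement corresponding to an unwrapped phase change follows d = -λΔφ/(4π); with Envisat's C-band wavelength of about 5.6 cm, one full fringe corresponds to roughly 2.8 cm of motion. The phase value and sign convention below are assumptions.

# Converting an unwrapped interferometric phase change to line-of-sight displacement.
# Sign conventions vary between processors; here a phase increase means motion away
# from the satellite. The phase value is assumed for illustration.
import math

wavelength_m = 0.056          # Envisat ASAR C-band wavelength, approx.
delta_phase_rad = -5.6        # assumed unwrapped phase change between acquisitions

displacement_m = -wavelength_m * delta_phase_rad / (4 * math.pi)
print(f"LOS displacement ~ {displacement_m * 1000:.1f} mm")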
Abstract:
Learning of preference relations has recently received significant attention in the machine learning community. It is closely related to classification and regression analysis and can be reduced to these tasks. However, preference learning involves prediction of an ordering of the data points rather than prediction of a single numerical value, as in the case of regression, or a class label, as in the case of classification. Therefore, studying preference relations within a separate framework not only facilitates a better theoretical understanding of the problem, but also motivates the development of efficient algorithms for the task. Preference learning has many applications in domains such as information retrieval, bioinformatics and natural language processing. For example, algorithms that learn to rank are frequently used in search engines for ordering the documents retrieved by a query. Preference learning methods have also been applied to collaborative filtering problems for predicting individual customer choices from the vast amount of user-generated feedback. In this thesis we propose several algorithms for learning preference relations. These algorithms stem from the well-founded and robust class of regularized least-squares methods and have many attractive computational properties. In order to improve the performance of our methods, we introduce several non-linear kernel functions. Thus, the contribution of this thesis is twofold: kernel functions for structured data that are used to take advantage of various non-vectorial data representations, and preference learning algorithms that are suitable for different tasks, namely efficient learning of preference relations, learning with large amounts of training data, and semi-supervised preference learning. The proposed kernel-based algorithms and kernels are applied to the parse ranking task in natural language processing, document ranking in information retrieval, and remote homology detection in bioinformatics. Training kernel-based ranking algorithms can be infeasible when the size of the training set is large. This problem is addressed by proposing a preference learning algorithm whose computational complexity scales linearly with the number of training data points. We also introduce a sparse approximation of the algorithm that can be efficiently trained with large amounts of data. For situations where only a small amount of labeled data but a large amount of unlabeled data is available, we propose a co-regularized preference learning algorithm. To conclude, the methods presented in this thesis address not only the efficient training of the algorithms but also fast regularization parameter selection, multiple output prediction, and cross-validation. Furthermore, the proposed algorithms lead to notably better performance in many of the preference learning tasks considered.
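As a minimal illustration of the regularized least-squares flavour of preference learning (a generic sketch, not one of the thesis algorithms), a linear scoring function can be fitted on pairwise difference vectors in closed form; kernelized and sparse variants build on the same idea.

# Pairwise, regularised least-squares preference learning: for each preferred pair
# (a, b), require w.(x_a - x_b) ~ 1 and solve the resulting ridge problem exactly.
import numpy as np

def fit_pairwise_rls(X, pairs, lam=1.0):
    """X: (n_items, n_features); pairs: list of (preferred, other) index pairs."""
    D = np.array([X[a] - X[b] for a, b in pairs])        # pairwise difference vectors
    y = np.ones(len(pairs))                               # target margin of 1 per pair
    d = X.shape[1]
    return np.linalg.solve(D.T @ D + lam * np.eye(d), D.T @ y)

# Toy example with invented feature vectors and preference pairs
X = np.array([[1.0, 0.2], [0.4, 0.9], [0.1, 0.1], [0.8, 0.7]])
pairs = [(0, 2), (3, 1), (3, 2)]                          # item 0 preferred over item 2, etc.
w = fit_pairwise_rls(X, pairs)
print("ranking scores:", X @ w)                           # higher score = preferred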
Abstract:
A solid phase extraction procedure using Amberlite XAD-1180/Pyrocatechol violet (PV) chelating resin for the determination of iron and lead ions in various environmental samples was established. The procedure is based on the sorption of lead(II) and iron(III) ions onto the resin at pH 9, followed by elution with 1 mol/L HNO3 and determination by flame atomic absorption spectrometry (FAAS). The influence of alkali metals, alkaline earth metals and some transition metals as interferents is discussed. The recoveries for the spiked analytes were greater than 95%. The detection limits for lead and iron by FAAS were 0.37 µg/L and 0.20 µg/L, respectively. Validation of the method described here was performed using three certified reference materials (SRM 1515 Apple Leaves, SRM 2711 Montana Soil and NRCC-SLRS-4 Riverine Water). The procedure was successfully applied to natural waters and human hair.
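The abstract does not state how the detection limits were estimated; a common convention in FAAS work is the 3σ criterion (three times the standard deviation of blank readings divided by the calibration slope). The sketch below uses that convention with assumed blank readings and an assumed slope, so the number it prints is illustrative only.

# Illustrative 3-sigma detection limit: LOD = 3 * s_blank / m, with assumed values.
import statistics

blank_absorbances = [0.0021, 0.0019, 0.0025, 0.0018, 0.0022, 0.0020, 0.0023]  # assumed
slope_abs_per_ug_L = 0.018   # assumed calibration slope (absorbance per ug/L)

s_blank = statistics.stdev(blank_absorbances)
lod = 3 * s_blank / slope_abs_per_ug_L
print(f"LOD ~ {lod:.2f} ug/L")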
Abstract:
A new analytical method was developed to non-destructively determine the pH and degree of polymerisation (DP) of cellulose in fibres of 19th- and 20th-century painting canvases, and to identify the fibre type: cotton, linen, hemp, ramie or jute. The method is based on NIR spectroscopy and multivariate data analysis, while for calibration and validation a reference collection of 199 historical canvas samples was used. The reference collection was analysed destructively using microscopy and chemical analytical methods. Partial least squares regression was used to build quantitative methods to determine pH and DP, and linear discriminant analysis was used to determine the fibre type. To interpret the obtained chemical information, an expert assessment panel developed a categorisation system to identify canvases that may not be fit to withstand excessive mechanical stress, e.g. during transportation. The limiting DP for this category was found to be 600. With the new method and categorisation system, canvases of 12 Dalí paintings from the Fundació Gala-Salvador Dalí (Figueres, Spain) were non-destructively analysed for pH, DP and fibre type, and their fitness was determined, which informs conservation recommendations. The study demonstrates that collection-wide canvas condition surveys can be performed efficiently and non-destructively, which could significantly improve collection management.
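The modelling steps (PLS regression for DP, linear discriminant analysis for fibre type) can be sketched as below. This is a generic illustration with random placeholder spectra, DP values and labels, not the published calibration; only the DP threshold of 600 is taken from the text.

# Hedged sketch: PLS regression to predict DP from NIR spectra and LDA to classify
# fibre type. All data here are random placeholders standing in for real measurements.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
spectra = rng.normal(size=(199, 300))          # 199 reference canvases x 300 wavelengths
dp = rng.uniform(300, 2000, size=199)          # reference degree of polymerisation values
fibre = rng.choice(["cotton", "linen", "hemp", "ramie", "jute"], size=199)

pls = PLSRegression(n_components=10).fit(spectra, dp)
lda = LinearDiscriminantAnalysis().fit(spectra, fibre)

new_canvas = rng.normal(size=(1, 300))         # spectrum of an unknown canvas
dp_pred = float(pls.predict(new_canvas).ravel()[0])
print(f"predicted DP = {dp_pred:.0f} ({'fit' if dp_pred >= 600 else 'at risk'} for transport)")
print("predicted fibre:", lda.predict(new_canvas)[0])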
Abstract:
The focus of this dissertation is to develop finite elements based on the absolute nodal coordinate formulation. The absolute nodal coordinate formulation is a nonlinear finite element formulation that was introduced to meet special requirements in the field of flexible multibody dynamics. In this formulation, a special definition of element rotation is employed to ensure that the formulation does not suffer from singularities due to large rotations. The absolute nodal coordinate formulation can be used for analyzing the dynamics of beam-, plate- and shell-type structures. The improvements to the formulation are mainly concentrated on the description of transverse shear deformation. Additionally, the formulation is verified against a conventional isoparametric solid finite element and a geometrically exact beam theory. Previous claims about especially high eigenfrequencies are studied by introducing beam elements based on the absolute nodal coordinate formulation in the framework of the large rotation vector approach. Additionally, the same high-eigenfrequency problem is studied by using constraints for transverse deformation. It was determined that the improvements to the description of transverse shear deformation lead to clear improvements in computational efficiency. This was especially true when a comparative stress must be defined, for example when using an elasto-plastic material. Furthermore, the developed plate element can be used to avoid certain numerical problems, such as shear and curvature locking. In addition, it was shown that, when compared to conventional solid elements or elements based on nonlinear beam theory, elements based on the absolute nodal coordinate formulation do not lead to an especially stiff system of equations of motion.
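To make the formulation concrete, the sketch below shows the kinematic core of a simple planar ANCF beam element: the global position along the element is interpolated from nodal positions and nodal slopes (position gradients) using cubic shape functions, so large rigid-body rotations require no separate rotation parameters. This is a textbook-style illustration, not an element developed in the dissertation.

# Planar ANCF beam kinematics: r(xi) = S(xi) e, with cubic shape functions and nodal
# coordinates e = [r1, dr1/dx, r2, dr2/dx] (each a 2D vector).
import numpy as np

def ancf_position(xi, e, L):
    """xi in [0, 1]; e: 8 nodal coordinates of the element; L: element length."""
    s = np.array([1 - 3*xi**2 + 2*xi**3,
                  L * (xi - 2*xi**2 + xi**3),
                  3*xi**2 - 2*xi**3,
                  L * (-xi**2 + xi**3)])
    e = np.asarray(e, dtype=float).reshape(4, 2)   # 4 nodal coordinate vectors in 2D
    return s @ e                                    # global position r(xi)

# Straight element of length 2 along x: nodes at (0,0) and (2,0), unit slopes along x
e = [0, 0, 1, 0, 2, 0, 1, 0]
print(ancf_position(0.5, e, L=2.0))                 # midpoint -> [1. 0.]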
Abstract:
The drug discovery process is facing new challenges in the evaluation of lead compounds as the number of new compounds synthesized is increasing. The potential of test compounds is most frequently assayed through the binding of the test compound to the target molecule or receptor, or by measuring functional secondary effects caused by the test compound in target model cells, tissues or organisms. Modern homogeneous high-throughput screening (HTS) assays for purified estrogen receptors (ER) utilize various luminescence-based detection methods. Fluorescence polarization (FP) is a standard method for ER ligand binding assays. It was used to demonstrate the performance of two-photon excitation of fluorescence (TPFE) versus the conventional one-photon excitation method. As a result, the TPFE method showed improved dynamics and was found to be comparable with the conventional method. It also held potential for efficient miniaturization. Other luminescence-based ER assays utilize energy transfer from a long-lifetime luminescent label, e.g. lanthanide chelates (Eu, Tb), to a prompt luminescent label, the signal being read in a time-resolved mode. As an alternative to this method, a new single-label (Eu) time-resolved detection method was developed, based on the quenching of the label by a soluble quencher molecule when displaced from the receptor to the solution phase by an unlabeled competing ligand. The new method was compared in parallel with the standard FP method. It was shown to yield results comparable with the FP method and was found to have a significantly higher signal-to-background ratio than FP. Cell-based functional assays for determining the extent of cell surface adhesion molecule (CAM) expression, combined with microscopy analysis of the target molecules, would provide improved information content compared to an expression-level assay alone. In this work, an immune response was simulated by exposing endothelial cells to cytokine stimulation, and the resulting increase in the level of adhesion molecule expression was analyzed on fixed cells by means of immunocytochemistry utilizing specific long-lifetime luminophore-labeled antibodies against chosen adhesion molecules. The results showed that the method is suitable for a multi-parametric assay of the protein expression levels of several CAMs simultaneously, combined with analysis of the cellular localization of the chosen adhesion molecules through time-resolved luminescence microscopy.
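Fluorescence polarization itself is computed from the parallel and perpendicular emission intensities. The small example below, with assumed intensities, shows the standard formula and why displacement of a bound tracer lowers the measured polarization; it is not tied to the assays described above.

# Fluorescence polarization P = (I_par - I_perp) / (I_par + I_perp), reported in
# millipolarisation units (mP). Intensity values are assumed for illustration.
def polarisation_mP(i_parallel, i_perpendicular):
    return 1000.0 * (i_parallel - i_perpendicular) / (i_parallel + i_perpendicular)

bound_tracer = polarisation_mP(12000, 8000)    # receptor-bound tracer tumbles slowly -> high P
free_tracer = polarisation_mP(10500, 9500)     # displaced tracer tumbles freely -> low P
print(f"bound ~ {bound_tracer:.0f} mP, displaced ~ {free_tracer:.0f} mP")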
Abstract:
Monitoring the occurrence of toxic components in natural watercourses is necessary for human well-being. Since the concentration of pollutants in natural ecosystems should be kept as low as possible, there is a constant search for chemical analytical methods with ever lower detection limits. At present, environmental analyses are performed with expensive and sophisticated instrumentation that requires a great deal of maintenance. Ion-selective electrodes have several good properties, such as portability and low energy consumption, and they are also relatively cost-effective. Using ion-selective electrodes for environmental analysis is possible if their sensitivity range can be extended by lowering their detection limits. To lower the detection limit of Pb(II)-selective electrodes, different types of ion-selective membranes based on polyacrylate copolymers, PVC and PbS/Ag2S were investigated. Solid-state electrodes with PbS/Ag2S membranes are generally simpler and more robust than conventional electrodes for trace analysis of ionic pollutants. In this work, the detection limit of the solid-state electrodes was lowered with a newly developed galvanostatic polarization method, and they could then be successfully used for quantitative determination of lead(II) concentrations in environmental samples collected in the Finnish archipelago near former industrial sites. The analytical results obtained with the ion-selective electrodes were confirmed by other analytical methods. Lowering the detection limit by means of the newly developed polarization method enables the determination of low and ultra-low lead concentrations that could not be reached with classical potentiometry. The real advantage of using these lead-selective electrodes is the possibility of performing measurements in untreated environmental samples despite the presence of solid particles, which is not possible with other analytical methods. I expect that the newly developed polarization method will set a trend in trace analysis with ion-selective electrodes.
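For context (an illustration, not part of the work described above), the ideal potentiometric response of a Pb(II)-selective electrode follows the Nernst equation, E = E0 + (RT/zF)·ln a, giving a slope of roughly 29.6 mV per decade of activity at 25 °C for a divalent ion; lowering the detection limit extends this linear response to lower activities.

# Nernstian response of a divalent-ion electrode; E0 and the activity values are assumed.
import math

R, T, F, z = 8.314, 298.15, 96485.0, 2
slope_mV = 1000 * R * T * math.log(10) / (z * F)      # mV per decade of activity (~29.6)

def electrode_potential_mV(activity, E0_mV=0.0):
    return E0_mV + slope_mV * math.log10(activity)

for a in (1e-4, 1e-6, 1e-8):                          # lower detection limits -> lower activities
    print(f"a(Pb2+) = {a:.0e}: E = {electrode_potential_mV(a):7.1f} mV")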
Abstract:
Transportation and warehousing are large and growing sectors in society, and their efficiency is of high importance. Transportation also accounts for a large share of global carbon dioxide emissions, which are one of the leading causes of anthropogenic climate warming. Various countries have agreed to decrease their carbon emissions according to the Kyoto Protocol. Transportation is the only sector where emissions have steadily increased since the 1990s, which highlights the importance of transportation efficiency. The efficiency of transportation and warehousing can be improved with the help of simulations, but models alone are not sufficient. This research concentrates on the use of simulations in decision support systems. Three main simulation approaches are used in logistics: discrete-event simulation, system dynamics, and agent-based modeling. However, each individual simulation approach has weaknesses of its own. Hybridization (combining two or more approaches) can improve the quality of the models, as it allows the weakness of one method to be overcome by using a different method. It is important to choose the correct approach (or a combination of approaches) when modeling transportation and warehousing issues. If an inappropriate method is chosen (this can occur if the modeler is proficient in only one approach or the model specification is not conducted thoroughly), the simulation model will have an inaccurate structure, which in turn will lead to misleading results. This issue can escalate further, as the decision-maker may assume that the presented simulation model gives the most useful results available, even though the whole model may be based on a poorly chosen structure. In this research it is argued that simulation-based decision support systems need to take various issues into account in order to function well. The actual simulation model can be constructed using any (or multiple) of the approaches, it can be combined with different optimization modules, and there needs to be a proper interface between the model and the user. These issues are presented in a framework that simulation modelers can use when creating decision support systems. In order for decision-makers to fully benefit from the simulations, the user interface needs to clearly separate the model from the user, but at the same time the user needs to be able to perform the appropriate runs in order to analyze the problems correctly. This study recommends that simulation modelers start to transfer their tacit knowledge into explicit knowledge. This would greatly benefit the whole simulation community and improve the quality of simulation-based decision support systems as well. More studies should also be conducted using hybrid models and integrating simulations with geographical information systems (GIS).
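As a minimal illustration of the discrete-event simulation approach mentioned above (a generic sketch, not one of the thesis models), events can be kept in a time-ordered queue and processed one at a time; here trucks arrive at a single warehouse dock and are unloaded sequentially, with all timings invented.

# Heap-based discrete-event simulation of a single unloading dock.
import heapq

def simulate(arrivals, unload_time=1.5):
    events = [(t, "arrival", i) for i, t in enumerate(arrivals)]
    heapq.heapify(events)
    dock_free_at = 0.0
    while events:
        time, kind, truck = heapq.heappop(events)
        if kind == "arrival":
            start = max(time, dock_free_at)              # wait if the dock is busy
            dock_free_at = start + unload_time
            heapq.heappush(events, (dock_free_at, "departure", truck))
        else:
            waited = time - unload_time - arrivals[truck]
            print(f"truck {truck} departs at t={time:.1f} (waited {waited:.1f})")

simulate(arrivals=[0.0, 0.5, 1.0, 4.0])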