928 results for BENCHMARK
Abstract:
This project is based on the theme of capacity-building in social organisations to improve their impact readiness, that is, the predictability with which they deliver their intended outcomes. All organisations with a social mission, non-profit or for-profit, are considered to fall within the social sector for the purposes of this work. The thesis will look at (i) what impact readiness is and the considerations for building it in social organisations, (ii) the international benchmark in measuring and building impact readiness, and (iii) the impact readiness of Portuguese social organisations and the current supply of capacity building for social impact in Portugal, before (iv) providing recommendations on the design of a framework for capacity building for impact readiness adapted to the Portuguese context. This work is of particular relevance to the Social Investment Laboratory, a sponsor of this project, in its policy work as part of the Portuguese Social Investment Taskforce (the “Taskforce”). This in turn will inform its contribution to the set-up of Portugal Inovação Social, a wholesaler and catalyst of social innovation and social investment in the country, launched in early 2015. Whilst the output of this work will be a set of recommendations for wider application to capacity-building programmes in Portugal, Portugal Inovação Social will also clearly have a role in coordinating the efforts of market players – foundations, corporations, the public sector and social organisations – in implementing these recommendations. In addition, the findings of this report could be relevant to other countries seeking to design capacity-building frameworks in their local markets and to any impact-driven organisations with an interest in enhancing the delivery of impact within their work.
Abstract:
Therapeutic drug monitoring (TDM) aims to optimize treatments by individualizing dosage regimens based on the measurement of blood concentrations. Dosage individualization to maintain concentrations within a target range requires pharmacokinetic and clinical capabilities. Bayesian calculations currently represent the gold standard TDM approach but require computation assistance. In recent decades, computer programs have been developed to assist clinicians in this task. The aim of this survey was to assess and compare computer tools designed to support TDM clinical activities. The literature and the Internet were searched to identify software. All programs were tested on personal computers. Each program was scored against a standardized grid covering pharmacokinetic relevance, user friendliness, computing aspects, interfacing and storage. A weighting factor was applied to each criterion of the grid to account for its relative importance. To assess the robustness of the software, six representative clinical vignettes were processed through each of them. Altogether, 12 software tools were identified, tested and ranked, representing a comprehensive review of the available software. Numbers of drugs handled by the software vary widely (from two to 180), and eight programs offer users the possibility of adding new drug models based on population pharmacokinetic analyses. Bayesian computation to predict dosage adaptation from blood concentration (a posteriori adjustment) is performed by ten tools, while nine are also able to propose a priori dosage regimens, based only on individual patient covariates such as age, sex and bodyweight. Among those applying Bayesian calculation, MM-USC*PACK© uses the non-parametric approach. The top two programs emerging from this benchmark were MwPharm© and TCIWorks. Most other programs evaluated had good potential while being less sophisticated or less user friendly. Programs vary in complexity and might not fit all healthcare settings. Each software tool must therefore be regarded with respect to the individual needs of hospitals or clinicians. Programs should be easy and fast for routine activities, including for non-experienced users. Computer-assisted TDM is gaining growing interest and should further improve, especially in terms of information system interfacing, user friendliness, data storage capability and report generation.
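As a rough illustration of the weighted evaluation grid described above, the following sketch combines per-criterion scores into a single ranking; the criteria, weights, and raw scores are hypothetical placeholders, not the grid actually used in the survey.

```python
# Minimal sketch of a weighted evaluation grid for TDM software.
# Criteria, weights, and raw scores are hypothetical placeholders.

# Relative importance of each criterion (weights sum to 1.0).
weights = {
    "pharmacokinetic_relevance": 0.35,
    "user_friendliness": 0.25,
    "computing_aspects": 0.20,
    "interfacing": 0.10,
    "storage": 0.10,
}

# Raw scores (0-10) assigned to two fictitious programs.
raw_scores = {
    "ProgramA": {"pharmacokinetic_relevance": 9, "user_friendliness": 7,
                 "computing_aspects": 8, "interfacing": 6, "storage": 7},
    "ProgramB": {"pharmacokinetic_relevance": 6, "user_friendliness": 9,
                 "computing_aspects": 7, "interfacing": 8, "storage": 8},
}

def weighted_score(scores: dict, weights: dict) -> float:
    """Combine per-criterion scores into a single weighted total."""
    return sum(weights[c] * scores[c] for c in weights)

ranking = sorted(raw_scores,
                 key=lambda p: weighted_score(raw_scores[p], weights),
                 reverse=True)
for program in ranking:
    print(program, round(weighted_score(raw_scores[program], weights), 2))
```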
Abstract:
Next-generation sequencing (NGS) technologies have become the standard for data generation in studies of population genomics, such as the 1000 Genomes Project (1000G). However, these techniques are known to be problematic when applied to highly polymorphic genomic regions, such as the human leukocyte antigen (HLA) genes. Because accurate genotype calls and allele frequency estimations are crucial to population genomics analyses, it is important to assess the reliability of NGS data. Here, we evaluate the reliability of genotype calls and allele frequency estimates of the single-nucleotide polymorphisms (SNPs) reported by 1000G (phase I) at five HLA genes (HLA-A, -B, -C, -DRB1, and -DQB1). We take advantage of the availability of HLA Sanger sequencing of 930 of the 1092 1000G samples and use this as a gold standard to benchmark the 1000G data. We document that 18.6% of SNP genotype calls in HLA genes are incorrect and that allele frequencies are estimated with an error greater than ±0.1 at approximately 25% of the SNPs in HLA genes. We found a bias toward overestimation of reference allele frequency for the 1000G data, indicating mapping bias is an important cause of error in frequency estimation in this dataset. We provide a list of sites that have poor allele frequency estimates and discuss the outcomes of including those sites in different kinds of analyses. Because the HLA region is the most polymorphic in the human genome, our results provide insights into the challenges of using NGS data at other genomic regions of high diversity.
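A minimal sketch of the frequency-comparison step described above: reference-allele frequencies computed from NGS genotype calls are compared per site against a gold standard, and sites with an error above ±0.1 are flagged. The genotype arrays are illustrative placeholders, not the 1000G or Sanger data.

```python
# Sketch: compare NGS-derived allele frequencies against a gold standard
# (e.g., Sanger-based calls) and flag sites with |error| > 0.1.
# The genotype arrays below are illustrative placeholders.

import numpy as np

# Genotypes coded as reference-allele counts per individual (0, 1, or 2);
# one row per site, one column per sample.
ngs_genotypes = np.array([[2, 1, 2, 2, 1],
                          [1, 1, 0, 2, 1],
                          [2, 2, 2, 2, 2]])
gold_genotypes = np.array([[2, 1, 1, 2, 1],
                           [1, 0, 0, 1, 1],
                           [1, 2, 1, 2, 1]])

def ref_allele_freq(genotypes: np.ndarray) -> np.ndarray:
    """Reference-allele frequency per site: mean allele count / 2."""
    return genotypes.mean(axis=1) / 2.0

freq_error = ref_allele_freq(ngs_genotypes) - ref_allele_freq(gold_genotypes)
poor_sites = np.where(np.abs(freq_error) > 0.1)[0]

print("Per-site frequency error:", np.round(freq_error, 3))
print("Sites with |error| > 0.1:", poor_sites)
# A systematically positive error would indicate overestimation of the
# reference allele, consistent with mapping bias toward the reference.
```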
Abstract:
The starting point of this work is the stage-gate model created for the case unit in 2009, whose understanding and use have not been at the desired level. Attitudes toward the model and the training given in its use have led to a situation where part of the unit's personnel do not recognize the model in their own work, even though it is applied in every product development project. The goal of this work is to describe the progression of the case unit's stage-gate model more clearly so that it can be deployed more effectively across the whole personnel, and to examine whether the role assignments of the different functions are optimal according to the model. A further goal is to study how information should flow between functions so that it is as efficient as possible. For this work, literature was collected to ground the theory, and 14 of the unit's employees were interviewed to map current problem areas. In addition, a benchmark interview was conducted with a unit of the same company whose business is as close as possible to that of the case unit. Based on the interviews, familiarity with the stage-gate model created for the unit was at a relatively weak level: only about half of the interviewees recognized it in their daily work. Many problem areas related to using the model emerged, and a large share of them concerned sales, marketing, and the project manager's work. Based on the problems identified, a clarifying diagram of the unit's role assignments was created to make the process easier for everyone to grasp. In addition, development proposals were given for involving personnel and sharing information. In this way, the entire personnel can be brought to understand the phases of the model and, most importantly, to understand their own role in each phase of the stage-gate model.
Abstract:
The construction of adenovirus vectors for cloning and foreign gene expression requires packaging cell lines that can complement missing viral functions caused by sequence deletions and/or replacement with foreign DNA sequences. In this study, packaging cell lines were designed to provide in trans the missing bovine adenovirus functions, so that recombinant viruses could be generated. Fetal bovine kidney and lung cells, acquired at the trimester term from a pregnant cow, were transfected with both digested wild-type BAV2 genomic DNA and pCMV-E1. The plasmid pCMV-E1 was specifically constructed to express E1 of BAV2 under the control of the cytomegalovirus enhancer/promoter (CMV). Selection for "true" transformants by continuous passaging showed no success in isolating immortalised cells, since the cells underwent crisis resulting in complete cell death. Moreover, selection for G418 resistance, using the same cells, also did not result in the isolation of an immortalised cell line, and the same culture-collapse event was observed. The lack of success in establishing an immortalised cell line from fetal tissue prompted us to transfect a pre-established cell line. We began by transfecting MDBK (Madin-Darby bovine kidney) cells with pCMV-E1-neo, which contains the bacterial selectable marker neo gene. A series of MDBK-derived cell lines that constitutively express bovine adenoviral (BAV) early region 1 (E1) were then isolated. Cells selected for resistance to the drug G418 were isolated collectively for full characterisation to assess their suitability as packaging cell lines. Individual colonies were isolated by limiting dilution and further tested for E1 expression and efficiency of DNA uptake. Two cell lines, L-23 and L-24, out of 48 generated foci tested positive for E1 expression using Northern blot analysis. DNA uptake studies, using both lipofectamine and calcium phosphate methods, were performed to compare these cells, their parental MDBK cells, and the unrelated human 293 cells as a benchmark. The results revealed that the new MDBK-derived clones were no more efficient than MDBK cells in the transient expression of transfected DNA and that they were inferior to 293 cells when using lacZ as the reporter gene. In view of the inherently poor transfection efficiency of MDBK cells and their derivatives, a number of other bovine cells were investigated for their potential as packaging cells. The cell line CCL40 was chosen for its high efficiency in DNA uptake and subsequently transfected with the plasmid vector pCMV-E1-neo. By selection with the drug G418, two cell lines were isolated, ProCell 1 and ProCell 2. These cell lines were tested for E1 expression, permissivity to BAV2 and DNA uptake efficiency, revealing a DNA uptake efficiency of 37%, comparable to that of CCL40. Attempts to rescue BAV2 mutants carrying the lacZ gene in place of E1 or E3 were carried out by co-transfecting wild-type viral DNA with either the plasmid pdlE1E-Z (which contains BAV2 sequences from 0% to 40.4% with the lacZ gene in place of the E1 region from 1.1% to 8.25%) or the plasmid pdlE3-5-Z (which contains BAV2 sequences from 64.8% to 100% with the lacZ gene in place of the E3 region from 75.8% to 81.4%). These co-transfections did not result in the generation of a viral mutant. The lack of mutant generation was thought to be caused by the relative inefficiency of DNA uptake.
Consequently, cosBAV2, a cosmid vector carrying the BAV2 genome, was modified to carry the neo reporter gene in place of the E3 region from 75.8% to 81.4%. The use of a single cosmid vector carrying the whole genome would eliminate the need for homologous recombination in order to generate a viral vector. Unfortunately, the transfection of cosBAV2-neo also did not result in the generation of a viral mutant. This may have been caused by the size of the E3 deletion, where excess sequences essential to the virus's survival might have been deleted. As an extension to this study, the spontaneous E3 deletion accidentally discovered in our viral stock could be used as a site of foreign gene insertion.
Abstract:
In 2007, Barry Bonds hit his 756th home run, breaking Hank Aaron's all-time record for most home runs in a Major League career. While it would be expected that such an accomplishment would induce unending praise and adulation for the new record-holder, Bonds did not receive the treatment typically reserved for a beloved baseball hero. The purpose of this thesis is to assess media representations of the 2007 home run chase in order to shed light upon the factors which led to the mixed representations which accompanied Bonds' assault on Aaron's record. Drawing from Roland Barthes' concept of myth, this thesis proposes that Bonds was portrayed in predominantly negative ways because he was seen as failing to embody the values of baseball's mythology. Using a qualitative content analysis of three major American newspapers, this thesis examines portrayals of Bonds and how he was shown both to represent and oppose elements from baseball's mythology, such as youth and a distant, agrarian past. Recognizing the ways in which baseball is associated with American life, the media representations of Bonds are also evaluated to discern whether he was portrayed as personifying a distinctly American set of values. The results indicate that, in media coverage of the 2007 home run chase, Bonds was depicted as a player of many contradictions. Most commonly, Bonds' athletic ability and career achievements were contrasted with unflattering descriptions of his character, including discussions of his alleged use of performance-enhancing substances. However, some coverage portrayed Bonds as embodying baseball myth. The findings contribute to an appreciation of the importance of historical context in examining media representations. This understanding is enhanced by an analysis of a selection of articles on Mark McGwire's record-breaking season in 1998, and careful consideration of, and comparison to, the context under which Bonds performed in 2007. Findings are also shown to support the contemporary existence of a strong American baseball mythology. That Bonds is both condemned for failing to uphold the mythology and praised for personifying it suggests that the values seen as inherent to baseball continue to act as an American cultural benchmark.
Abstract:
The present thesis examines the determinants of the bankruptcy protection duration for Canadian firms. Using a sample of Canadian firms that filed for bankruptcy protection between the calendar years 1992 and 2009, we find that firm age, the industry-adjusted operating margin, the default spread, the industrial production growth rate and the interest rate are influential factors in determining the length of the protection period. Older firms tend to stay longer under protection from creditors. As older firms have more complicated structures and issues to settle, their risk of exiting protection soon (the hazard rate) is small. We also find that firms that perform better than their benchmark, as measured by the industry they belong to, tend to leave the bankruptcy protection state quickly. We conclude that the fate of relatively successful companies is determined faster. Moreover, we report that it takes less time to reach a final resolution for firms under bankruptcy protection when the default spread is low or when the appetite for risk is high. Conversely, during periods of high default spreads and flight to quality, it takes longer to resolve the bankruptcy issue. This last finding may suggest that troubled firms should place themselves under protection when spreads are low. However, this ignores the endogeneity issue: a high default spread may cause, and incidentally reflect, higher bankruptcy rates in the economy. Indeed, we find that bankruptcy protection lasts longer during economic downturns. We explain this relation by the natural increase in default rates among firms (and individuals) during economically troubled times. Default spreads are usually larger during these harsh periods as investors become more risk averse while their wealth shrinks. Using a log-logistic hazard model, we also find that firms that file under the Companies' Creditors Arrangement Act (CCAA) spend longer restructuring than firms that file under the Bankruptcy and Insolvency Act (BIA). As the BIA is more statutory and less flexible, solutions can be reached faster by court orders.
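As a rough sketch of the duration analysis described above, the example below fits a log-logistic accelerated-failure-time model to synthetic protection spells using the lifelines package; the covariates, coefficients, and data are invented placeholders, not the thesis's sample.

```python
# Sketch: log-logistic AFT model of bankruptcy-protection duration.
# Uses the lifelines package; data and covariates are synthetic placeholders.

import numpy as np
import pandas as pd
from lifelines import LogLogisticAFTFitter

rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "firm_age": rng.integers(1, 60, n),              # years since incorporation
    "default_spread": rng.normal(2.0, 0.5, n),       # percentage points
    "rel_operating_margin": rng.normal(0.0, 0.1, n), # margin relative to industry benchmark
    "ccaa_filing": rng.integers(0, 2, n),            # 1 = CCAA, 0 = BIA
})
# Synthetic durations (months under protection), loosely increasing with age,
# spread and CCAA filing, and decreasing with relative performance.
df["duration"] = np.exp(1.5 + 0.01 * df["firm_age"] + 0.3 * df["default_spread"]
                        - 1.0 * df["rel_operating_margin"] + 0.4 * df["ccaa_filing"]
                        + rng.normal(0, 0.3, n))
df["exited"] = 1  # all spells observed to completion (no censoring) in this toy data

aft = LogLogisticAFTFitter()
aft.fit(df, duration_col="duration", event_col="exited")
aft.print_summary()  # positive coefficients indicate covariates that lengthen the spell
```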
Abstract:
The main focus of this thesis is to evaluate and compare the Hyperball learning algorithm (HBL) to other learning algorithms. In this work, HBL is compared to feed-forward artificial neural networks using back-propagation learning, K-nearest neighbour and ID3 algorithms. In order to evaluate the similarity of these algorithms, we carried out three experiments using nine benchmark data sets from the UCI machine learning repository. The first experiment compares HBL to the other algorithms when the sample size of the dataset changes. The second experiment compares HBL to the other algorithms when the dimensionality of the data changes. The last experiment compares HBL to the other algorithms according to the level of agreement with the data target values. In general, considering classification accuracy as the measure, our observations showed that HBL performs as well as most ANN variants. Additionally, we deduced that HBL's classification accuracy outperforms ID3's and K-nearest neighbour's for the selected data sets.
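Since HBL is not available as a standard library implementation, the sketch below illustrates only the comparison protocol for the baseline learners, using scikit-learn on a UCI-derived dataset; the dataset choice and the use of DecisionTreeClassifier as an ID3 stand-in are assumptions made for illustration.

```python
# Sketch of the comparison protocol: cross-validated classification accuracy
# of the baseline learners on a benchmark dataset. HBL itself is not shown;
# scikit-learn's entropy-based decision tree stands in for ID3.

from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)  # a UCI-derived benchmark dataset

models = {
    "k-NN (k=5)": make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5)),
    "MLP (backprop)": make_pipeline(StandardScaler(),
                                    MLPClassifier(hidden_layer_sizes=(20,),
                                                  max_iter=2000, random_state=0)),
    "Decision tree (ID3-like)": DecisionTreeClassifier(criterion="entropy",
                                                       random_state=0),
}

for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5, scoring="accuracy")
    print(f"{name}: mean accuracy {scores.mean():.3f} (+/- {scores.std():.3f})")
```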
Abstract:
Complex networks can arise naturally and spontaneously from all things that act as part of a larger system. From the patterns of socialization between people to the way biological systems organize themselves, complex networks are ubiquitous, but they are currently poorly understood. A number of algorithms, designed by humans, have been proposed to describe the organizational behaviour of real-world networks. Consequently, breakthroughs in genetics, medicine, epidemiology, neuroscience, telecommunications and the social sciences have recently resulted. The algorithms, called graph models, represent significant human effort. Deriving accurate graph models is non-trivial, time-intensive and challenging, and may only yield useful results for very specific phenomena. An automated approach can greatly reduce the human effort required and, if effective, provide a valuable tool for understanding the large decentralized systems of interrelated things around us. To the best of the author's knowledge, this thesis proposes the first method for the automatic inference of graph models for complex networks with varied properties, with and without community structure. Furthermore, to the best of the author's knowledge, it is the first application of genetic programming to the automatic inference of graph models. The system and methodology were tested against benchmark data and shown to be capable of reproducing close approximations to well-known algorithms designed by humans. Furthermore, when used to infer a model for real biological data, the resulting model was more representative than models currently used in the literature.
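To make the model-evaluation step concrete, the sketch below shows one plausible fitness measure under stated assumptions: a candidate graph model is scored by how closely the degree distribution of a network it generates matches that of a target network. The models, parameters, and distance measure are illustrative choices, not the genetic-programming system developed in the thesis.

```python
# Sketch of one plausible fitness-evaluation step for graph-model inference:
# generate a network from a candidate model and measure how far its degree
# distribution is from a target (observed) network.

import networkx as nx
import numpy as np

def degree_histogram(g: nx.Graph, max_degree: int) -> np.ndarray:
    """Normalized degree histogram, padded to a fixed length."""
    hist = np.zeros(max_degree + 1)
    for _, d in g.degree():
        hist[min(d, max_degree)] += 1
    return hist / hist.sum()

def fitness(candidate_graph: nx.Graph, target_graph: nx.Graph) -> float:
    """Lower is better: L1 distance between degree distributions."""
    max_d = max(max(d for _, d in target_graph.degree()),
                max(d for _, d in candidate_graph.degree()))
    return float(np.abs(degree_histogram(candidate_graph, max_d)
                        - degree_histogram(target_graph, max_d)).sum())

# Toy target: a preferential-attachment network; candidates: two known models.
target = nx.barabasi_albert_graph(500, 3, seed=1)
candidates = {
    "Barabasi-Albert(3)": nx.barabasi_albert_graph(500, 3, seed=2),
    "Erdos-Renyi(p=0.012)": nx.gnp_random_graph(500, 0.012, seed=2),
}
for name, g in candidates.items():
    print(name, round(fitness(g, target), 4))
```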
Abstract:
Emerging markets have received wide attention from investors around the globe because of their return potential and risk diversification. This research examines the selection and timing performance of Canadian mutual funds which invest in fixed-income and equity securities in emerging markets. We use (un)conditional two- and five-factor benchmark models that accommodate the dynamics of returns in emerging markets. We also adopt the cross-sectional bootstrap methodology to distinguish between ‘skill’ and ‘luck’ for individual funds. All the tests are conducted using a comprehensive data set of emerging-market bond and equity funds over the period 1989-2011. The risk-adjusted measures of performance are estimated using the least squares method with the Newey-West adjustment for standard errors, which is robust to conditional heteroskedasticity and autocorrelation. The performance statistics of the emerging funds before (after) management-related costs are insignificantly positive (significantly negative). They are sensitive to the chosen benchmark model, and conditional information improves selection performance. The timing statistics are largely insignificant throughout the sample period and are not sensitive to the benchmark model. Evidence of timing and selection ability is obtained for a small number of funds and is not sensitive to the fee structure. We also find evidence that a majority of individual funds provide zero abnormal returns before fees (very few provide positive returns) and significantly negative returns after fees. At the negative end of the tail of the performance distribution, our resampling tests fail to reject the role of bad luck in the poor performance of funds, and we conclude that most of them are merely ‘unlucky’.
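The alpha-estimation step described above can be sketched as follows: a fund's excess returns are regressed on benchmark factors with a constant term, and Newey-West (HAC) standard errors are used for inference. The factor series and fund returns are synthetic placeholders, and statsmodels is an assumed tooling choice rather than the study's actual setup.

```python
# Sketch: estimating a fund's risk-adjusted performance (alpha) from a
# factor benchmark model with Newey-West (HAC) standard errors.
# Factor and fund returns below are synthetic placeholders.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
T = 264  # monthly observations, roughly 1989-2011

# Two-factor benchmark: emerging-market equity and bond excess returns.
factors = np.column_stack([rng.normal(0.008, 0.06, T),   # EM equity factor
                           rng.normal(0.004, 0.02, T)])  # EM bond factor
fund_excess = 0.001 + factors @ np.array([0.7, 0.2]) + rng.normal(0, 0.02, T)

X = sm.add_constant(factors)  # the constant term is the fund's alpha
model = sm.OLS(fund_excess, X)
# HAC covariance = Newey-West adjustment, robust to heteroskedasticity
# and autocorrelation in the residuals.
result = model.fit(cov_type="HAC", cov_kwds={"maxlags": 6})
print(result.summary())  # the 'const' coefficient is the estimated alpha
```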
Abstract:
Ordered gene problems are a very common class of optimization problems. Because of their popularity, countless algorithms have been developed in an attempt to find high-quality solutions to them, and many other types of problems are commonly reduced to ordered gene style problems so that these well-studied heuristics and metaheuristics can be applied. Multiple ordered gene problems are studied, namely the travelling salesman problem, the bin packing problem, and the graph colouring problem. In addition, two bioinformatics problems not traditionally seen as ordered gene problems are studied: DNA error correction and DNA fragment assembly. These problems are studied with multiple variations and combinations of heuristics and metaheuristics with two distinct types of representations. The majority of the algorithms are built around the Recentering-Restarting Genetic Algorithm. The algorithm variations were successful on all problems studied, particularly the two bioinformatics problems. For DNA error correction, multiple cases were found in which 100% of the codes were corrected. The algorithm variations were also able to beat all other state-of-the-art DNA fragment assemblers on 13 of 16 benchmark problem instances.
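The ordered-gene (permutation) representation can be sketched as follows for the travelling salesman problem, together with a generic order-preserving crossover; this is an illustrative operator and toy instance, not the Recentering-Restarting Genetic Algorithm itself.

```python
# Sketch of the ordered-gene representation: a candidate solution is a
# permutation of items (here, cities in a TSP tour), and crossover must
# preserve the permutation property.

import random

def tour_length(tour, dist):
    """Total length of a closed tour given a distance matrix."""
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def order_crossover(parent1, parent2, rng=random):
    """Copy a slice from parent1, fill remaining positions in parent2's order."""
    n = len(parent1)
    a, b = sorted(rng.sample(range(n), 2))
    child = [None] * n
    child[a:b] = parent1[a:b]
    remaining = [gene for gene in parent2 if gene not in child[a:b]]
    positions = [i for i in range(n) if child[i] is None]
    for pos, gene in zip(positions, remaining):
        child[pos] = gene
    return child

rng = random.Random(0)
cities = list(range(6))
dist = [[abs(i - j) for j in cities] for i in cities]  # toy distance matrix
p1 = rng.sample(cities, len(cities))
p2 = rng.sample(cities, len(cities))
child = order_crossover(p1, p2, rng)
print(p1, p2, "->", child, "length:", tour_length(child, dist))
```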
Abstract:
The purpose of this research was to examine the ways in which individuals with mental illness create a life of purpose, satisfaction and meaning. The data supported the identification of four common themes: (1) the power of leisure in activation, (2) the power of leisure in resiliency, (3) the power of leisure in identity and (4) the power of leisure in reducing struggle. Through an exploration of the experience of having a mental illness, this project supports the view that leisure provides therapeutic benefits that transcend negative life events. In addition, this project highlights the individual nature of recovery as a process of self-discovery. Through the creation of a visual model, this project provides a benchmark for how a small group of individuals have experienced living well with mental illness. As such, this work brings new thought to the growing body of mental health and leisure studies literature.
Abstract:
Experimental Extended X-ray Absorption Fine Structure (EXAFS) spectra carry information about the chemical structure of metal protein complexes. However, predicting the structure of such complexes from EXAFS spectra is not a simple task. Currently, methods such as Monte Carlo optimization or simulated annealing are used in structure refinement of EXAFS. These methods have proven somewhat successful in structure refinement but have not been successful in finding the global minimum. Multiple population-based algorithms, including a genetic algorithm, a restarting genetic algorithm, differential evolution, and particle swarm optimization, are studied for their effectiveness in structure refinement of EXAFS. The oxygen-evolving complex in the S1 state is used as a benchmark for comparing the algorithms. These algorithms were successful in finding new atomic structures that produced improved calculated EXAFS spectra over atomic structures previously found.
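As an illustration of the population-based approach, the sketch below runs a standard global-best PSO against a toy spectrum-misfit objective. The spectrum model, parameter bounds, and PSO constants are placeholders; a real EXAFS refinement would compute candidate spectra with an ab initio code rather than this closed-form stand-in.

```python
# Sketch of particle swarm optimization on a structure-refinement style
# objective: minimize the misfit between a calculated and an "experimental"
# spectrum. The spectrum model here is a toy placeholder.

import numpy as np

rng = np.random.default_rng(0)
k_grid = np.linspace(2, 12, 200)
true_params = np.array([2.7, 0.005])  # e.g., bond length, Debye-Waller factor

def toy_spectrum(params, k):
    r, sigma2 = params
    return np.sin(2 * k * r) * np.exp(-2 * sigma2 * k**2)

experimental = toy_spectrum(true_params, k_grid)

def misfit(params):
    return np.sum((toy_spectrum(params, k_grid) - experimental) ** 2)

# Standard global-best PSO.
n_particles, n_iter = 30, 200
low, high = np.array([2.0, 0.001]), np.array([3.5, 0.02])
pos = rng.uniform(low, high, (n_particles, 2))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([misfit(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()

for _ in range(n_iter):
    r1, r2 = rng.random((n_particles, 1)), rng.random((n_particles, 1))
    vel = 0.72 * vel + 1.49 * r1 * (pbest - pos) + 1.49 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, low, high)
    vals = np.array([misfit(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmin()].copy()

print("Recovered parameters:", np.round(gbest, 4), "true:", true_params)
```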
Characterizing Dynamic Optimization Benchmarks for the Comparison of Multi-Modal Tracking Algorithms
Abstract:
Population-based metaheuristics, such as particle swarm optimization (PSO), have been employed to solve many real-world optimization problems. Although it is often sufficient to find a single solution to these problems, there exist cases where identifying multiple, diverse solutions can be beneficial or even required. Some of these problems are further complicated by a change in their objective function over time. This type of optimization is referred to as dynamic, multi-modal optimization. Algorithms which exploit multiple optima in a search space are identified as niching algorithms. Although numerous dynamic niching algorithms have been developed, their performance is often measured solely on their ability to find a single, global optimum. Furthermore, the comparisons often use synthetic benchmarks whose landscape characteristics are generally limited and unknown. This thesis provides a landscape analysis of the dynamic benchmark functions commonly developed for multi-modal optimization. The benchmark analysis results reveal that the mechanisms responsible for dynamism in the current dynamic benchmarks do not significantly affect landscape features, thus suggesting a lack of representation of problems whose landscape features vary over time. This analysis is used in a comparison of current niching algorithms to identify the effects that specific landscape features have on niching performance. Two performance metrics are proposed to measure both the scalability and accuracy of the niching algorithms. The algorithm comparison results demonstrate which algorithms are best suited to a variety of dynamic environments. This comparison also examines each of the algorithms in terms of its niching behaviours and analyzes the range of, and trade-off between, scalability and accuracy when tuning the algorithms' respective parameters. These results contribute to the understanding of current niching techniques as well as the problem features that ultimately dictate their success.
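For context, one widely used niching accuracy measure is the peak ratio: the fraction of known optima for which at least one solution lies within a given tolerance. The sketch below is a generic illustration of that idea and is not necessarily one of the two metrics proposed in the thesis.

```python
# Sketch of a common niching accuracy measure, the peak ratio: the fraction
# of known optima with at least one solution within a given radius.

import numpy as np

def peak_ratio(solutions, known_optima, radius=0.1):
    """Fraction of known optima with at least one solution within `radius`."""
    found = 0
    for peak in known_optima:
        dists = np.linalg.norm(solutions - peak, axis=1)
        if np.any(dists <= radius):
            found += 1
    return found / len(known_optima)

# Toy example: four known peaks in 2-D and a swarm's final positions.
known_optima = np.array([[0.0, 0.0], [1.0, 1.0], [-1.0, 1.0], [1.0, -1.0]])
solutions = np.array([[0.02, -0.01], [0.98, 1.03], [0.5, 0.5], [1.01, -0.97]])
print("Peak ratio:", peak_ratio(solutions, known_optima))  # 3 of 4 peaks -> 0.75
```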
Abstract:
As a result of mutations in genes, which are simple changes in our DNA, we can have undesirable phenotypes known as genetic diseases or disorders. These small changes, which happen frequently, can have extreme results. Understanding and identifying these changes and associating the mutated genes with genetic diseases can play an important role in our health by enabling better diagnostic and therapeutic strategies for these genetic diseases. As a result of years of experiments, there is a vast amount of data regarding the human genome and different genetic diseases that still needs to be processed properly to extract useful information. This work is an effort to analyze some useful datasets and to apply different techniques to associate genes with genetic diseases. Two genetic diseases were studied here: Parkinson’s disease and breast cancer. Using genetic programming, we analyzed the complex network around known disease genes of the aforementioned diseases, and based on that we generated a ranking of genes according to their relevance to these diseases. In order to generate these rankings, centrality measures of all nodes in the complex network surrounding the known disease genes of the given genetic disease were calculated. Using genetic programming, all the nodes were assigned scores based on the similarity of their centrality measures to those of the known disease genes. The results obtained showed that this method is successful at finding these patterns in centrality measures, and the highly ranked genes are worthy candidate disease genes for further study. Using standard benchmark tests, we tested our approach against ENDEAVOUR and CIPHER, two well-known disease gene ranking frameworks, and obtained comparable results.
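A minimal sketch of the centrality-profile idea described above: candidate genes are scored by how close their centrality measures are to the average profile of known disease genes. The network, seed genes, centrality measures, and distance-based score are illustrative assumptions; the genetic-programming step that evolves the scoring function itself is not shown.

```python
# Sketch: score candidate genes by the similarity of their centrality profile
# to that of known disease genes. The network and gene labels are toy
# placeholders for a protein-interaction-style network.

import networkx as nx
import numpy as np

g = nx.barabasi_albert_graph(100, 3, seed=42)  # toy interaction network
known_disease_genes = [0, 1, 2]                # hypothetical seed genes
candidates = [n for n in g.nodes if n not in known_disease_genes]

centralities = {
    "degree": nx.degree_centrality(g),
    "betweenness": nx.betweenness_centrality(g),
    "closeness": nx.closeness_centrality(g),
}

def profile(node):
    """Vector of centrality measures for one node."""
    return np.array([centralities[m][node] for m in centralities])

disease_profile = np.mean([profile(n) for n in known_disease_genes], axis=0)

def score(node):
    """Higher score = centrality profile closer to the known disease genes."""
    return -np.linalg.norm(profile(node) - disease_profile)

ranking = sorted(candidates, key=score, reverse=True)
print("Top candidate genes:", ranking[:10])
```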