821 results for Computer Diagnostics
Abstract:
CHARGE syndrome, Sotos syndrome and 3p deletion syndrome are examples of rare inherited syndromes that have been recognized for decades but for which molecular diagnosis has become possible only through recent advances in genomic research. Despite these advances, the development of diagnostic tests for rare syndromes has been hindered by diagnostic laboratories' limited funds for test development and by their prioritization of tests for which (relatively) high demand can be expected. In this study, molecular diagnostic tests for CHARGE syndrome and Sotos syndrome were developed and successfully translated into routine diagnostic testing in the Medical Genetics laboratory (UTUlab). A mutation was identified in 40.5% of patients in the CHARGE syndrome group and in 34% of the Sotos syndrome group, reflecting the use of the tests in routine differential diagnostics. In CHARGE syndrome, the low prevalence of structural aberrations was also confirmed. In 3p deletion syndrome, it was shown that small terminal deletions are not causative for the syndrome, and that array-based testing provides a reliable estimate of the deletion size, although benign copy number variants complicate result interpretation. During test development, it became clear that finding an optimal molecular diagnostic strategy for a given syndrome is always a compromise between the sensitivity, specificity and feasibility of applying a new method. In addition, the clinical utility of a test should be considered prior to its development: a test that performs well in the laboratory may have limited utility for the patient, whereas a test that performs poorly in the laboratory may have a great impact on the patient and their family. At present, the development of next-generation sequencing methods is shifting the molecular diagnostics of rare diseases from single tests towards whole-genome analysis.
Abstract:
The question of the trainability of executive functions and the impact of such training on related cognitive skills has stirred considerable research interest. Despite a number of studies investigating this question, it has not yet been resolved. The general aim of this thesis was to investigate two very different types of executive function training: laboratory-based computerized training (Studies I-III) and real-world training through bilingualism (Studies IV-V). Bilingualism as a form of executive function training is based on the idea that managing two languages requires executive resources, and previous studies have suggested a bilingual advantage in executive functions. Three executive functions were studied in the present thesis: updating of working memory (WM) contents, inhibition of irrelevant information, and shifting between tasks and mental sets. Studies I-III investigated the effects of computer-based training of WM updating (Study I), inhibition (Study II), and set shifting (Study III) in healthy young adults. All three studies showed improved performance on the trained task. More importantly, improvement on an untrained task tapping the trained executive function (near transfer) was seen in Studies I and II. None of the three studies showed improvement on untrained tasks tapping some other cognitive function (far transfer) as a result of training. Study I also used PET to investigate the effects of WM updating training on a neurotransmitter closely linked to WM, namely dopamine. The PET results revealed increased striatal dopamine release during WM updating performance as a result of training. Study IV investigated the ability to inhibit task-irrelevant stimuli in bilinguals and monolinguals using a dichotic listening task. The results showed that the bilinguals outperformed the monolinguals in inhibiting task-irrelevant information.
Study V introduced a new, complementary research approach to studying the bilingual executive advantage and its underlying mechanisms. To circumvent the methodological problems related to natural-groups designs, this approach focuses only on bilinguals and examines whether individual differences in bilingual behavior correlate with executive task performance. Using measures that tap the three above-mentioned executive functions, the results suggested that more frequent language switching was associated with better set shifting skills, and earlier acquisition of the second language was related to better inhibition skills. In conclusion, the present behavioral results showed that computer-based training of executive functions can improve performance on the trained task and on closely related tasks, but does not yield a more general improvement of cognitive skills. Moreover, the functional neuroimaging results show that WM training modulates striatal dopaminergic function, indicating training-induced neural plasticity in this important neurotransmitter system. With regard to bilingualism, the results provide further support for the idea that bilingualism can enhance executive functions. In addition, the new complementary research approach proposed here provides some clues as to which aspects of everyday bilingual behavior may be related to the executive function advantage in bilingual individuals.
Abstract:
Communication, the flow of ideas and information between individuals in a social context, is at the heart of the educational experience. Constructivism and constructivist theories form the foundation for the collaborative learning processes of creating and sharing meaning in online educational contexts. The Learning and Collaboration in Technology-enhanced Contexts (LeCoTec) course comprised 66 participants drawn from four European universities (Oulu, Turku, Ghent and Ramon Llull). These participants were split into 15 groups with the express aim of learning about computer-supported collaborative learning (CSCL). The Community of Inquiry model (social, cognitive and teaching presences) provided the content and tools for learning about and researching the collaborative interactions in this environment. Sampled comments from the collaborative phase were collected and analyzed at chain level and group level, with the aim of identifying the message types that sustained high learning outcomes. Furthermore, Social Network Analysis was used to examine the density of whole-group interactions and to identify the popular and active members within the highly collaborative groups. It was observed that long chains occur in groups with high-quality outcomes. These chains were also characterized by Social, Interactivity, Administrative and Content comment types. In addition, high outcomes were realized in the highly interactive cases and the high-density groups. In groups with low interactivity, commenting centered on one or two central group members. In conclusion, future online environments should support higher-order learning and develop greater metacognition and self-regulation. Moreover, such an environment, with a wide variety of problem-solving tools, would enhance interactivity.
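The density and in-degree measures used in such an analysis can be sketched with a toy interaction network. The member names and reply edges below are invented purely for illustration; they are not data from the LeCoTec study:

```python
# Hypothetical directed "who replied to whom" edges within one discussion
# group; names and links are invented for illustration only.
edges = [
    ("ana", "ben"), ("ben", "ana"), ("cara", "ana"), ("dan", "ana"),
    ("ana", "cara"), ("ben", "cara"), ("ana", "dan"),
]

members = {m for edge in edges for m in edge}
n = len(members)

# Directed network density: observed links over all possible ordered pairs.
density = len(set(edges)) / (n * (n - 1))

# In-degree as a simple popularity measure: who receives the most replies.
in_degree = {m: 0 for m in members}
for _, target in edges:
    in_degree[target] += 1
most_popular = max(in_degree, key=in_degree.get)
```

In a highly interactive group the density approaches 1, while in the low-interactivity pattern described above most edges point at one or two central members, which the in-degree measure makes visible.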
Abstract:
This paper investigates defect detection methodologies for rolling element bearings through vibration analysis. Specifically, the utility of a new signal processing scheme combining the High Frequency Resonance Technique (HFRT) and the Adaptive Line Enhancer (ALE) is investigated. An accelerometer is used to acquire the data, and experimental results have been obtained for outer race defects. The HFRT exploits the fact that much of the energy resulting from a defect impact manifests itself in the higher resonant frequencies of a system. Demodulation of these frequency bands through the envelope technique is then employed to gain further insight into the nature of the defect while further increasing the signal-to-noise ratio. If the defect is periodic, the defect frequency is then present in the spectrum of the enveloped signal. The ALE is used to enhance the envelope spectrum by reducing the broadband noise, providing clear peaks at the harmonics of the characteristic defect frequency. It is implemented by using a delayed version of the signal together with the signal itself to decorrelate the wideband noise; this noise is then rejected by an adaptive filter based upon the periodic information in the signal. The results show the effectiveness of the methodology in determining both the severity and location of a defect. In two instances, a linear relationship between signal characteristics and defect size is indicated.
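The demodulation step of the HFRT can be sketched in a few lines: the resonance-band energy is enveloped (here with a simple rectify-and-smooth detector standing in for full analytic-signal demodulation), and the defect frequency emerges as a peak in the envelope spectrum. All signal parameters below (sampling rate, resonance and defect frequencies) are invented for illustration, and the ALE noise-reduction stage is omitted:

```python
import numpy as np

# Simulated outer-race defect: impacts at a 100 Hz defect frequency excite
# a 12 kHz structural resonance (all parameters invented for illustration).
fs = 50_000                      # sampling rate, Hz
t = np.arange(0, 1.0, 1 / fs)
defect_freq = 100.0              # characteristic defect frequency, Hz

impacts = (np.sin(2 * np.pi * defect_freq * t) > 0.99).astype(float)
burst = np.sin(2 * np.pi * 12_000 * np.arange(500) / fs) * np.exp(-np.arange(500) / 50)
signal = np.convolve(impacts, burst, mode="same")
signal += 0.1 * np.random.default_rng(0).standard_normal(len(t))

# Envelope detection (demodulation): rectify, then smooth with a short
# moving average to strip the high-frequency carrier.
envelope = np.convolve(np.abs(signal), np.ones(200) / 200, mode="same")

# The defect frequency shows up as a peak in the envelope spectrum.
spectrum = np.abs(np.fft.rfft(envelope - envelope.mean()))
freqs = np.fft.rfftfreq(len(envelope), 1 / fs)
band = (freqs > 10) & (freqs < 500)
peak_freq = freqs[band][np.argmax(spectrum[band])]
```

The raw spectrum of `signal` is dominated by the 12 kHz resonance; only after demodulation does the low-frequency impact rate become directly visible, which is the core idea of the HFRT.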
Abstract:
This work presents the implementation and comparison of three different techniques of three-dimensional computer vision:
• Stereo vision - correlation between two 2D images;
• Sensor fusion - use of different sensors: a 2D camera and a 1D ultrasound sensor;
• Structured light.
The computer vision techniques presented here were evaluated against the following characteristics:
• Computational effort (elapsed time to obtain the 3D information);
• Influence of environmental conditions (noise due to non-uniform lighting, overlighting and shadows);
• The cost of the infrastructure for each technique;
• Analysis of uncertainties, precision and accuracy.
Matlab, version 5.1, was chosen for implementing the algorithms of the three techniques because of the simplicity of its commands, programming and debugging. Moreover, this software is well known and widely used in the academic community, allowing the results of this work to be reproduced and verified. Examples of three-dimensional vision applied to robotic assembly ("pick-and-place") tasks are presented.
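The correlation step of the stereo technique can be illustrated with a minimal block-matching sketch (Python/numpy standing in for the Matlab implementation; the window size, disparity range and sum-of-absolute-differences cost are arbitrary illustrative choices):

```python
import numpy as np

def disparity_by_matching(left, right, block=5, max_disp=10):
    """Naive stereo block matching: for each pixel of the left image, find
    the horizontal shift of the right image minimizing the sum of absolute
    differences over a small window (a stand-in for correlation)."""
    h, w = left.shape
    half = block // 2
    disp = np.zeros((h, w), dtype=int)
    for y in range(half, h - half):
        for x in range(half, w - half):
            patch = left[y - half:y + half + 1, x - half:x + half + 1]
            best_cost, best_d = np.inf, 0
            for d in range(min(max_disp, x - half) + 1):
                cand = right[y - half:y + half + 1, x - d - half:x - d + half + 1]
                cost = np.abs(patch - cand).sum()
                if cost < best_cost:
                    best_cost, best_d = cost, d
            disp[y, x] = best_d
    return disp

# Synthetic check: the right view is the left view shifted 3 px, so the
# recovered disparity should be 3 everywhere away from the borders.
rng = np.random.default_rng(1)
left = rng.random((20, 40))
right = np.roll(left, -3, axis=1)
disp = disparity_by_matching(left, right)
```

With calibrated cameras, the recovered disparity map converts to depth via triangulation; the real implementations also have to cope with the lighting noise and occlusions evaluated in the study.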
Abstract:
The number of molecular diagnostic assays has increased tremendously in recent years. Nucleic acid diagnostic assays have been developed especially for the detection of human pathogenic microbes and of genetic markers predisposing to certain diseases. Closed-tube methods are preferred because they are usually faster and easier to perform than heterogeneous methods; in addition, target nucleic acids are commonly amplified, which creates a risk of contaminating subsequent reactions with the amplification product if the tubes are opened. The present study introduces a new closed-tube PCR assay concept based on switchable complementation probes, in which two non-fluorescent probes form a fluorescent lanthanide chelate complex in the presence of the target DNA. In this dual-probe PCR assay method, one oligonucleotide probe carries a non-fluorescent lanthanide chelate and the other probe a light-absorbing antenna ligand. The fluorescent lanthanide chelate complex is formed only when the non-fluorescent probes are hybridized to adjacent positions on the target DNA, bringing the reporter moieties into close proximity. The complex is formed by self-assembled lanthanide chelate complementation, in which the antenna ligand is coordinated to the lanthanide ion captured in the chelate. The complementation probe-based assays with time-resolved fluorescence measurement showed a low background signal level and hence relatively high nucleic acid detection sensitivity (low picomolar target concentrations). Different lanthanide chelate structures were explored, and a new cyclic seven-dentate lanthanide chelate was found suitable for the complementation probe method. It was also found to withstand relatively high PCR reaction temperatures, which was essential for the PCR assay applications.
A seven-dentate chelate with two unoccupied coordination sites must be used instead of a more stable eight- or nine-dentate chelate because the antenna ligand needs to coordinate to the free coordination sites of the lanthanide ion. The previously used linear seven-dentate lanthanide chelate was found to be unstable under PCR conditions; hence, the new cyclic chelate was needed. The complementation probe PCR assay method showed a high signal-to-background ratio of up to 300 due to a low background fluorescence level, and the results (threshold cycles) in real-time PCR were reached approximately 6 amplification cycles earlier than with the commonly used FRET-based closed-tube PCR method. The suitability of the complementation probe method for different nucleic acid assay applications was studied. 1) A duplex complementation probe C. trachomatis PCR assay with a simple 10-minute urine sample preparation was developed to study the suitability of the method for clinical diagnostics. The performance of the C. trachomatis assay was equal to that of the commercial C. trachomatis nucleic acid amplification assay, which requires a more complex sample preparation based on DNA extraction. 2) A PCR assay for the detection of the HLA-DQA1*05 allele, which is used to predict the risk of type 1 diabetes, was developed to study the performance of the method in genotyping. A simple blood sample preparation was used, in which the nucleic acids were released from dried blood sample punches using high temperature and alkaline reaction conditions. The complementation probe HLA-DQA1*05 PCR assay showed good genotyping performance, correlating 100% with the routinely used heterogeneous reference assay. 3) To study the suitability of the complementation probe method for direct measurement of a target organism, e.g. in culture media, the complementation probes were applied to amplification-free closed-tube bacteriophage quantification by measuring M13 bacteriophage ssDNA.
A low picomolar bacteriophage concentration was detected in a rapid 20-minute assay. The assay provides a quick and reliable alternative to the commonly used but relatively unreliable UV photometry and to time-consuming culture-based bacteriophage detection methods, and it indicates that the method could also be used for direct measurement of other micro-organisms. The complementation probe PCR method has a low background signal level, leading to a high signal-to-background ratio and relatively sensitive nucleic acid detection. The method is compatible with simple sample preparation and was shown to tolerate residues of urine, blood, bacteria and bacterial culture media. The general trend in nucleic acid diagnostics is towards easy-to-use assays suitable for rapid near-patient analysis. The complementation probe PCR assays with their brief sample preparation should be relatively easy to automate and would thus allow the development of high-performance nucleic acid amplification assays with a short overall assay time.
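As an illustration of the threshold-cycle (Ct) readout discussed above, the sketch below computes Ct from simulated amplification curves. The sigmoid model and every number in it are invented, not data from this study; the point is only to show how a lower background level lets the detection threshold be set lower and therefore be crossed earlier:

```python
import numpy as np

def threshold_cycle(fluorescence, threshold):
    """Interpolated cycle at which a real-time PCR fluorescence curve first
    crosses the detection threshold; None if it never does."""
    f = np.asarray(fluorescence, dtype=float)
    above = np.nonzero(f >= threshold)[0]
    if len(above) == 0:
        return None
    i = int(above[0])
    if i == 0:
        return 0.0
    # Linear interpolation between the two bracketing cycles.
    return (i - 1) + (threshold - f[i - 1]) / (f[i] - f[i - 1])

cycles = np.arange(40)

def amplification(background):
    # Toy sigmoid amplification curve on top of a constant background.
    return background + 1.0 / (1.0 + np.exp(-(cycles - 20) / 1.5))

low_bg = amplification(background=0.01)   # low-background reporter (toy)
high_bg = amplification(background=0.05)  # noisier reporter (toy)

# Set each threshold at ten times that assay's own background level.
ct_low = threshold_cycle(low_bg, threshold=10 * 0.01)
ct_high = threshold_cycle(high_bg, threshold=10 * 0.05)
```

With identical underlying amplification, the low-background curve crosses its threshold several cycles earlier, qualitatively mirroring the roughly 6-cycle advantage reported for the complementation probe method over the FRET-based method.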
Abstract:
Conventional diagnostic tests and technologies typically allow only a single analysis and result per test. The aim of this study was to propose robust and multiplex array-in-well test platforms based on oligonucleotide and protein arrays, combining the advantages of simple instrumentation and upconverting phosphor (UCP) reporter technology. UCPs are luminescent lanthanide-doped crystals that have the unique capability to convert infrared radiation into visible light. No autofluorescence is produced from the sample under infrared excitation, enabling the development of highly sensitive assays. In this study, an oligonucleotide array-in-well hybridization assay was developed for the detection and genotyping of human adenoviruses. The study provided a verification of the advantages and potential of the UCP-based reporter technology in multiplex assays, as well as of anti-Stokes photoluminescence detection with a new anti-Stokes photoluminescence imager. The developed assay was technically improved and used to detect and genotype adenovirus types from clinical specimens. Based on the results of the epidemiological study, an outbreak of adenovirus type B03 was observed in the autumn of 2010. A quantitative array-in-well immunoassay was developed for three target analytes (prostate specific antigen, thyroid stimulating hormone, and luteinizing hormone). Quantitative results were obtained for each analyte, and the analytical sensitivities in buffer were in the clinically relevant range. Another protein-based array-in-well assay was developed for multiplex serodiagnostics. This assay was able to detect parvovirus B19 IgG and adenovirus IgG antibodies simultaneously from serum samples, in agreement with reference assays. The study demonstrated that UCP technology is a robust detection method for diverse multiplex imaging-based array-in-well assays.
Abstract:
The use of water-sensitive papers is an important tool for assessing the quality of pesticide application on crops, but manual analysis is laborious and time-consuming. Thus, this study aimed to evaluate and compare the results obtained from four software programs for spray droplet analysis in different scanned images of water-sensitive papers. After spraying, papers with four droplet deposition patterns (varying droplet spectra and densities) were analyzed manually and by means of the following computer programs: CIR, e-Sprinkle, DepositScan and Conta-Gotas. The volume median diameter, the number median diameter and the number of droplets per target area were studied. There is a strong correlation between the values measured using the different programs and the manual analysis, but there are large differences between the numerical values measured for the same paper. Thus, it is not advisable to compare results obtained from different programs.
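The distinction drawn here, strong correlation between programs but divergent absolute values, can be shown numerically. The droplet counts below are invented to mimic two programs whose readings track each other linearly while one reads systematically higher:

```python
import numpy as np

# Hypothetical droplet counts per target area reported by two programs for
# the same ten papers (invented numbers): program B reads ~40% higher than A.
prog_a = np.array([12., 25., 40., 55., 63., 78., 90., 110., 130., 150.])
prog_b = 1.4 * prog_a + np.array([2., -3., 1., 4., -2., 3., -1., 2., -4., 1.])

# Pearson correlation is near 1: the programs rank and scale papers alike.
r = np.corrcoef(prog_a, prog_b)[0, 1]

# Yet the mean difference reveals a large systematic offset between them.
mean_diff = (prog_b - prog_a).mean()
```

A near-perfect correlation therefore does not make the numbers interchangeable, which is why comparing absolute results across programs is discouraged; an agreement analysis of the differences, not just the correlation, exposes the bias.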
Abstract:
This thesis concentrates on the validation of the generic thermal hydraulic computer code TRACE against the challenges of the VVER-440 reactor type. The code's capability to model the VVER-440 geometry and the thermal hydraulic phenomena specific to this reactor design has been examined and demonstrated to be acceptable. The main challenge in VVER-440 thermal hydraulics appeared in the modelling of the horizontal steam generator; the major difficulty lies not in the code physics or numerics but in the formulation of a representative nodalization structure. Another VVER-440 specialty, the hot leg loop seals, challenges system codes functionally in general, but proved readily representable. Computer code models have to be validated against experiments to achieve confidence in them. When a new computer code is to be used for nuclear power plant safety analysis, it must first be validated against a large variety of experiments. The validation process has to cover both the code itself and the code input. Uncertainties of different natures are identified in the different phases of the validation procedure and can even be quantified. This thesis presents a novel approach to input model validation and uncertainty evaluation in the different stages of the computer code validation procedure. It also demonstrates that in safety analysis there are inevitably significant uncertainties that are not statistically quantifiable; these need to be, and can be, addressed by other, less simplistic means, ultimately relying on the competence of the analysts and the capability of the community to support the experimental verification of analytical assumptions. This method essentially complements the commonly used uncertainty assessment methods, which usually rely on statistical methods alone.
Abstract:
At the present time, protein folding is an extremely active field of research including aspects of biology, chemistry, biochemistry, computer science and physics. The fundamental principles have practical applications in the exploitation of advances in genome research, in the understanding of different pathologies, and in the design of novel proteins with special functions. Although the detailed mechanisms of folding are not completely known, significant advances have been made in the understanding of this complex process through both experimental and theoretical approaches. In this review, the evolution of concepts from Anfinsen's postulate to the "new view" emphasizing the concept of the energy landscape of folding is presented. The main rules of protein folding have been established from in vitro experiments. It has long been accepted that the in vitro refolding process is a good model for understanding the mechanisms by which a nascent polypeptide chain reaches its native conformation in the cellular environment. Indeed, many denatured proteins, even those whose disulfide bridges have been disrupted, are able to refold spontaneously. Although this assumption was challenged by the discovery of molecular chaperones, the wealth of structural and functional information now available has clearly established that the main rules of protein folding deduced from in vitro experiments are also valid in the cellular environment. This modern view of protein folding permits a better understanding of the aggregation processes that play a role in several pathologies, including those induced by prions and Alzheimer's disease. Drug design and de novo protein design, which aim at creating proteins with novel functions by applying protein folding rules, are making significant progress and offer perspectives for practical applications in the development of pharmaceuticals and medical diagnostics.
Abstract:
The aim of the present study was to measure full epidermal thickness, stratum corneum thickness, rete length, dermal papilla widening and suprapapillary epidermal thickness in psoriasis patients using light microscopy and computer-supported image analysis. The data obtained were analyzed in terms of patient age, type of psoriasis, total body surface area involvement, scalp and nail involvement, duration of psoriasis, and family history of the disease. The study was conducted on 64 patients and 57 controls, whose skin biopsies were examined by light microscopy. The acquired microscopic images were transferred to a computer and measurements were made using image analysis. The skin biopsies, taken from different body areas, were examined for parameters such as epidermal, corneal and suprapapillary epidermal thickness. The most prominent increase in thickness was detected in the palmar region. Corneal thickness was greater in patients with scalp involvement than in those without (t = -2.651, P = 0.008). The most prominent increase in rete length was observed in the knees (median: 491 µm, t = 10.117, P < 0.001). The difference in rete length between patients with a positive and a negative family history was significant (t = -3.334, P = 0.03), rete length being 27% greater in psoriasis patients without a family history. The differences in dermal papilla distances among patients were very small. We conclude that microscope-supported thickness measurements provide objective results.
Abstract:
How does a successful programmer work? The tasks of programming computer games and of programming industrial, safety-critical systems appear quite different. Through a careful empirical study, this thesis compares and contrasts these two forms of programming and shows that programming encompasses more than technical skill. Starting from hermeneutic and rhetorical theory, and drawing on both cultural studies and computer science, the thesis shows that programmers' traditions and values are fundamental to their work, and that both kinds of programming can be understood and analyzed through the classical tradition of textual interpretation. Moreover, computer programs can be viewed and analyzed using classical theories of practical speech production; in this context, programs are seen as a kind of utterance. Altogether, the thesis advocates a return to the foundations of science, namely a constant, unceasing cyclical movement between experiencing and understanding. This stands in contrast to a reductionist view of science, which distinguishes sharply between the subjective and the objective and thereby presupposes the possibility of attaining complete knowledge. Incomplete knowledge is the domain of interpretation and hermeneutics. The aim of the thesis is to demonstrate, by means of examples, the cultural, hermeneutic and rhetorical nature of programming.
Abstract:
As the world becomes more technologically advanced and economies become globalized, computer science is evolving faster than ever before. With this evolution and globalization comes the need for sustainable university curricula that adequately prepare graduates for life in industry. Additionally, behavioural skills, or "soft" skills, have become just as important as technical abilities and knowledge, or "hard" skills. The objective of this study was to investigate the current skill gap between computer science university graduates and actual industry needs, as well as the sustainability of current computer science university curricula, by conducting a systematic literature review of existing publications on the subject together with a survey of recently graduated computer science students and their work supervisors. A quantitative study was carried out with respondents from six countries, mainly Finland; 31 of the responses came from recently graduated computer science professionals and 18 from their employers. The observed trends suggest that a skill gap really does exist, particularly with "soft" skills, and that many companies are forced to provide additional training to newly graduated employees if these employees are to be successful at their jobs.
Supplier-provided automatic warehouse replenishment solutions in the pharmaceutical diagnostics industry