918 results for "Generation of test processes"
Abstract:
We have developed a sensitive resonant four-wave mixing technique based on two-photon parametric four-wave mixing with the addition of a phase-matched "seeder" field. Generation of the seeder field via the same four-wave mixing process in a high-pressure cell enables automatic phase matching to be achieved in a low-pressure sample cell. This arrangement facilitates sensitive detection of complex molecular spectra by simply tuning the pump laser. We demonstrate the technique with the detection of nitric oxide down to concentrations more than 4 orders of magnitude below the capability of parametric four-wave mixing alone, with an estimated detection threshold of 10^12 molecules/cm^3.
Abstract:
This study describes a simple method for long-term establishment of human ovarian tumor lines and prediction of T-cell epitopes that could be potentially useful in the generation of tumor-specific cytotoxic T lymphocytes (CTLs). Nine ovarian tumor lines (INT.Ov) were generated from solid primary or metastatic tumors as well as from ascitic fluid. Notably, all lines expressed HLA class I, intercellular adhesion molecule-1 (ICAM-1), polymorphic epithelial mucin (PEM) and cytokeratin (CK), but not HLA class II, B7.1 (CD80) or BAGE. Of the 9 lines tested, 4 (INT.Ov1, 2, 5 and 6) expressed the folate receptor (FR-alpha) and 6 (INT.Ov1, 2, 5, 6, 7 and 9) expressed the epidermal growth factor receptor (EGFR); MAGE-1 and p185(HER-2/neu) were only found in 2 lines (INT.Ov1 and 2) and GAGE-1 expression in 1 line (INT.Ov2). The identification of class I MHC ligands and T-cell epitopes within protein antigens was achieved by applying several theoretical methods, including: 1) similarity or homology searches against MHCPEP; 2) BIMAS; and 3) artificial neural network-based predictions for the proteins MAGE, GAGE, EGFR, p185(HER-2/neu) and FR-alpha expressed in the INT.Ov lines. Because of the high frequency of expression of some of these proteins in ovarian cancer and the ability to determine HLA-binding peptides efficiently, it is expected that, after appropriate screening, a large cohort of ovarian cancer patients may become candidates to receive peptide-based vaccines. (C) 1997 Wiley-Liss, Inc.
Abstract:
Speech understanding disorders in the elderly may be due to peripheral or central auditory dysfunction. Asymmetry of results in dichotic testing increases with age and may reflect a lack of inter-hemispheric transmission and cognitive decline. Aim: To investigate auditory processing in aged people with no hearing complaints. Study design: Clinical prospective. Materials and Methods: Twenty-two volunteers, aged between 55 and 75 years, were evaluated. They reported no hearing complaints and had maximal auditory thresholds of 40 dB HL up to 4 kHz, minimum speech recognition scores of 80%, and peripheral symmetry between the ears. We used two kinds of tests: speech in noise and dichotic alternated dissyllables (SSW). Results were compared between males and females, right and left ears, and between age groups. Results: There were no significant differences between genders in either test. Left ears showed worse results in the competitive condition of the SSW. Individuals aged 65 or older had poorer performances than those aged 55 to 64. Conclusion: Central auditory tests showed worse performance with aging. The use of a dichotic test in the auditory evaluation of the elderly may help in the early identification of degenerative processes, which are common among these patients.
Abstract:
Renal drug elimination is determined by glomerular filtration, tubular secretion, and tubular reabsorption. Changes in the integrity of these processes influence renal drug clearance, and these changes may not be detected by conventional measures of renal function such as creatinine clearance. The aim of the current study was to examine the analytic issues involved in developing a cocktail of marker drugs (fluconazole, rac-pindolol, para-aminohippuric acid, sinistrin) to measure simultaneously the mechanisms contributing to renal clearance. High-performance liquid chromatographic methods of analysis for fluconazole, pindolol, para-aminohippuric acid, and creatinine and an enzymatic assay for sinistrin were developed or modified and then validated to allow determination of each of the compounds in both plasma and urine in the presence of all other marker drugs. A pilot clinical study in one volunteer was conducted to ensure that the assays were suitable for quantitating all the marker drugs with the sensitivity and specificity needed to allow accurate determination of individual renal clearances. The performance of all assays (plasma and urine) complied with published validation criteria. All standard curves displayed linearity over the concentration ranges required, with coefficients of correlation greater than 0.99. Interday and intraday precision of quality controls for each marker in plasma and urine was less than 11.9%. Recoveries of markers (and internal standards) in plasma and urine were all at least 90%. All markers investigated were shown to be stable when plasma or urine was frozen and thawed. For all the assays developed, there were no interferences from other markers or endogenous substances. In the pilot clinical study, concentrations of all markers could be accurately and reproducibly determined for a sufficient duration after administration to calculate an accurate renal clearance for each marker.
This article presents details of the analytic techniques developed for measuring concentrations of marker drugs, administered as a single dose, for the different processes contributing to renal drug elimination.
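The validation thresholds quoted above (standard-curve correlation above 0.99, precision below 11.9%) are simple to compute. The sketch below checks them against made-up calibration and quality-control numbers, since the abstract reports only the acceptance criteria:

```python
import statistics

def linearity_r(x, y):
    """Pearson correlation coefficient for a standard curve."""
    mx, my = statistics.mean(x), statistics.mean(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return sxy / (sx * sy)

def cv_percent(replicates):
    """Coefficient of variation (%), the usual precision metric."""
    return 100 * statistics.stdev(replicates) / statistics.mean(replicates)

# Hypothetical QC data: a near-linear standard curve and replicate measurements
conc = [1, 2, 5, 10, 20]           # concentrations (units illustrative)
resp = [0.9, 2.1, 5.2, 9.8, 20.3]  # detector responses
print(linearity_r(conc, resp) > 0.99)             # acceptance: r > 0.99
print(cv_percent([9.8, 10.1, 10.3, 9.9]) < 11.9)  # acceptance: CV < 11.9%
```

Both checks print `True` for these illustrative numbers; in practice each marker's curve and QC replicates would be evaluated the same way in both plasma and urine.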
Abstract:
Pasminco Century Mine has developed a geophysical logging system to provide new data for ore mining/grade control and the generation of Short Term Models for mine planning. Previous work indicated the applicability of petrophysical logging for lithology prediction; however, the automation of the method was not considered reliable enough for the development of a mining model. A test survey was undertaken using two diamond-drilled control holes and eight percussion holes. All holes were logged with natural gamma, magnetic susceptibility and density. Calibration of the LogTrans auto-interpretation software using only natural gamma and magnetic susceptibility indicated that both lithology and stratigraphy could be predicted. Development of a capability to enforce stratigraphic order within LogTrans increased the reliability and accuracy of interpretations. After the completion of a feasibility program, Century Mine has invested in a dedicated logging vehicle to log blast holes as well as for use in in-fill drilling programs. Future refinement of the system may lead to the development of GPS-controlled excavators for mining ore.
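LogTrans itself is a statistical auto-interpretation package whose internals the abstract does not describe, but the core idea of predicting lithology from natural gamma and magnetic susceptibility alone can be illustrated with a minimal nearest-centroid classifier; the rock types and centroid values below are invented for the example:

```python
import math

# Invented class centroids: mean natural-gamma (API) and magnetic-susceptibility
# (SI x 10^-3) responses per rock type, as might be learned from control holes.
CENTROIDS = {
    "shale":     (120.0, 0.8),
    "siltstone": (80.0, 0.4),
    "ore_zone":  (40.0, 2.5),
}

def classify(gamma, mag_sus):
    """Assign a lithology by nearest centroid in (gamma, susceptibility) space."""
    return min(CENTROIDS, key=lambda k: math.dist((gamma, mag_sus), CENTROIDS[k]))

print(classify(115.0, 0.7))  # shale
```

A real system would also enforce stratigraphic order on the sequence of predictions down-hole, which is the refinement the abstract credits with improving reliability.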
Abstract:
Management are keen to maximize the life span of an information system because of the high cost, organizational disruption, and risk of failure associated with the re-development or replacement of an information system. This research investigates the effects that various factors have on an information system's life span by examining how those factors affect an information system's stability. The research builds on a previously developed two-stage model of information system change, whereby an information system is either in a stable state of evolution, in which its functionality is evolving, or in a state of revolution, in which it is being replaced because it is not providing the functionality expected by its users. A case study surveyed a number of systems within one organization. The aim was to test whether a relationship existed between the base value of the volatility index (a measure of the stability of an information system) and certain system characteristics. Data relating to some 3000 user change requests covering 40 systems over a 10-year period were obtained. The following factors were hypothesized to have significant associations with the base value of the volatility index: language level (generation of the language of construction), system size, system age, and the timing of changes applied to a system. Significant associations were found in the hypothesized directions, except that the timing of user changes was not associated with any change in the value of the volatility index. Copyright (C) 2002 John Wiley & Sons, Ltd.
Abstract:
A critical feature of the operational definition of the agrammatic comprehension deficit (ACD) that frequently co-occurs with Broca's aphasia is above-chance performance on well-formedness judgment tasks for many syntactic constructions, but impaired performance where syntactic binding of traces to their antecedents occurs. However, the methodologies used to establish this aspect of the performance profile of the ACD have been predominantly offline. Offline well-formedness tasks entail extra-linguistic processing (e.g. perception, attention, short-term memory, conscious reflection) in varying amounts, and the influence of such processes on parsing mechanisms is yet to be fully established. In order to (a) further understand the role of extra-linguistic processing in parsing, and (b) gain a more direct insight into the online nature of parsing in Broca's aphasia, 8 subjects underwent a series of well-formedness judgment investigations using both offline and online test batteries. The sentence types and error types used were motivated by three current theories about the nature of the ACD, namely the Trace-Based Account (Grodzinsky, 2000), the Mapping Hypothesis (Linebarger et al., 1983) and capacity proposals (e.g. Frazier & Friederici, 1991). The results from the present investigation speak directly to the three aforementioned theories and also demonstrate the important role that extra-linguistic processing plays during offline assessment. The clinical implications of the different outcomes from the offline vs. online tasks are also discussed.
Abstract:
The Test of Mouse Proficiency (TOMP) was developed to assist occupational therapists and education professionals assess computer mouse competency skills in children from preschool to upper primary (elementary) school age. The preliminary reliability and validity of TOMP are reported in this paper. Methods used to examine the internal consistency, test-retest reliability, and criterion- and construct-related validity of the test are elaborated. In the continuing process of test refinement, these preliminary studies support to varying degrees the reliability and validity of TOMP. Recommendations for further validation of the assessment are discussed along with indications for potential clinical application.
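Internal consistency of the kind examined for TOMP is conventionally reported as Cronbach's alpha; a minimal sketch follows (the item scores are invented, as the abstract gives no raw data):

```python
def cronbach_alpha(items):
    """Cronbach's alpha: items is a list of per-item score lists,
    each inner list aligned across the same respondents."""
    k, n = len(items), len(items[0])

    def var(xs):  # sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    total_scores = [sum(items[i][j] for i in range(k)) for j in range(n)]
    return (k / (k - 1)) * (1 - sum(var(it) for it in items) / var(total_scores))

# Two perfectly consistent items across three children -> alpha = 1.0
print(cronbach_alpha([[1, 2, 3], [1, 2, 3]]))  # 1.0
```

In practice alpha would be computed over the full set of TOMP items and respondents; values near 1 indicate that the items measure a common construct.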
Abstract:
The paper proposes a methodology especially focused on the generation of strategic plans of action, emphasizing the relevance of having a structured timeframe classification for the actions. The methodology explicitly recognizes the relevance of long-term goals as strategic drivers, which must ensure that the complex system is capable of responding effectively to changes in the environment. In addition, the methodology employs engineering systems techniques in order to understand the inner workings of the system and to build up alternative plans of action. Owing to these different aspects, the proposed approach offers higher flexibility than traditional methods. The validity and effectiveness of the methodology have been demonstrated by analyzing an airline company composed of 5 subsystems, with the aim of defining a plan of action for the next 5 years that can improve efficiency, redefine the mission, or increase revenues.
Abstract:
A Blumlein line is a particular pulse forming line (PFL) configuration that allows the generation of high-voltage sub-microsecond square pulses, with the same voltage amplitude as the dc charging voltage, into a matched load. By stacking n Blumlein lines one can, in theory, multiply the input dc charging voltage by n. In order to understand the operating behavior of this electromagnetic system, and to further optimize its operation, it is fundamental to model it theoretically, that is, to calculate the voltage amplitude at each circuit point and the time instant at which it occurs. To do this, one needs to define the reflection and transmission coefficients at each point where an impedance discontinuity occurs. The experimental results of a fast solid-state switch, which discharges a three-stage Blumlein stack, will be compared with theoretical ones.
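The reflection and transmission coefficients mentioned above follow from standard transmission-line theory: for a voltage wave travelling from a line of characteristic impedance Z1 into a section of impedance Z2, the reflection coefficient is (Z2 - Z1)/(Z2 + Z1) and the transmission coefficient is 1 plus that value. A minimal sketch:

```python
def reflection_coefficient(z1, z2):
    """Voltage reflection coefficient at a discontinuity from Z1 into Z2."""
    return (z2 - z1) / (z2 + z1)

def transmission_coefficient(z1, z2):
    """Voltage transmission coefficient; equals 1 + reflection coefficient."""
    return 2 * z2 / (z2 + z1)

# Matched junction: no reflection, full transmission
print(reflection_coefficient(50.0, 50.0))    # 0.0
print(transmission_coefficient(50.0, 50.0))  # 1.0
# Step up into a higher impedance: partial positive reflection
print(round(reflection_coefficient(50.0, 150.0), 3))  # 0.5
```

Tracking these coefficients at every discontinuity, together with the line transit times, is exactly what yields the voltage amplitude and timing at each circuit point in a stacked Blumlein model.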
Abstract:
In recent years there has been a considerable increase in the number of people in need of intensive care, especially among the elderly, a phenomenon that is related to population ageing (Brown 2003). However, this is not exclusive to the elderly, as conditions such as obesity, diabetes, and high blood pressure have been increasing among young adults (Ford and Capewell 2007). This new reality has to be dealt with by the healthcare sector, and particularly by the public one. Thus, finding new and cost-effective ways of delivering healthcare is of particular importance, especially when patients are not to be detached from their environments (WHO 2004). Following this line of thinking, a VirtualECare Multiagent System is presented in section 2, with our efforts centered on its Group Decision modules (Costa, Neves et al. 2007) (Camarinha-Matos and Afsarmanesh 2001). On the other hand, there has been a growing interest in combining the technological advances in the information society - computing, telecommunications and knowledge - in order to create new methodologies for problem solving, namely those based on Group Decision Support Systems (GDSS) and agent perception. Indeed, the new economy, along with increased competition in today's complex business environments, leads companies to seek complementarities in order to increase competitiveness and reduce risks. Under these scenarios, planning takes a major role in a company's life cycle. However, effective planning depends on the generation and analysis of ideas (innovative or not) and, as a result, the idea generation and management processes are crucial. Our objective is to apply the GDSS referred to above to a new area. We believe that the use of GDSS in the healthcare arena will allow professionals to achieve better results in the analysis of a patient's Electronic Clinical Profile (ECP). This attainment is vital, given the arrival on the market of new drugs and medical practices, which compete for limited resources.
Abstract:
Master's dissertation, Geologia do Ambiente e Sociedade (Environmental Geology and Society), 15 February 2016, Universidade dos Açores.
Abstract:
One of the main trends in workplace aggression research is the study of its antecedents. The literature also reveals, however, that some predictors remain understudied, such as organizational change [1]. Additionally, possible mediators of this relationship have not been investigated. The main objective of this research is to study the mediating effect of leader political behavior (soft and hard versions) on the relationship between organizational change and workplace aggression. Participants representing a wide variety of jobs across many organizations were surveyed. The measures used in this research are the Organizational Change Questionnaire - Climate of Change, Processes, and Readiness [2], a Workplace Aggression Scale [e.g. 3, 4] and a Political Behavior Questionnaire [5]. The results of the study and its theoretical and practical implications will be presented and discussed.
Abstract:
This thesis describes the application of automatic learning methods to a) the classification of organic and metabolic reactions, and b) the mapping of Potential Energy Surfaces (PES). The classification of reactions was approached with two distinct methodologies: a representation of chemical reactions based on NMR data, and a representation of chemical reactions from the reaction equation based on the physico-chemical and topological features of chemical bonds. NMR-based classification of photochemical and enzymatic reactions. Photochemical and metabolic reactions were classified by Kohonen Self-Organizing Maps (Kohonen SOMs) and Random Forests (RFs), taking as input the difference between the 1H NMR spectra of the products and the reactants. Such a representation can be applied in the automatic analysis of changes in the 1H NMR spectrum of a mixture and their interpretation in terms of the chemical reactions taking place. Examples of possible applications are the monitoring of reaction processes, evaluation of the stability of chemicals, or even the interpretation of metabonomic data. A Kohonen SOM trained with a data set of metabolic reactions catalysed by transferases was able to correctly classify 75% of an independent test set in terms of the EC number subclass. Random Forests improved the correct predictions to 79%. With photochemical reactions classified into 7 groups, an independent test set was classified with 86-93% accuracy. The data set of photochemical reactions was also used to simulate mixtures with two reactions occurring simultaneously. Kohonen SOMs and Feed-Forward Neural Networks (FFNNs) were trained to classify the reactions occurring in a mixture based on the 1H NMR spectra of the products and reactants. Kohonen SOMs allowed the correct assignment of 53-63% of the mixtures (in a test set). Counter-Propagation Neural Networks (CPNNs) gave similar results.
The use of supervised learning techniques improved the results: to 77% of correct assignments when an ensemble of ten FFNNs was used, and to 80% when Random Forests were used. This study was performed with NMR data simulated from the molecular structure by the SPINUS program. In the design of one test set, simulated data were combined with experimental data. The results support the proposal of linking databases of chemical reactions to experimental or simulated NMR data for automatic classification of reactions and mixtures of reactions. Genome-scale classification of enzymatic reactions from their reaction equation. The MOLMAP descriptor relies on a Kohonen SOM that defines types of bonds on the basis of their physico-chemical and topological properties. The MOLMAP descriptor of a molecule represents the types of bonds available in that molecule. The MOLMAP descriptor of a reaction is defined as the difference between the MOLMAPs of the products and the reactants, and numerically encodes the pattern of bonds that are broken, changed, and made during a chemical reaction. The automatic perception of chemical similarities between metabolic reactions is required for a variety of applications, ranging from the computer validation of classification systems and genome-scale reconstruction (or comparison) of metabolic pathways to the classification of enzymatic mechanisms. Catalytic functions of proteins are generally described by EC numbers, which are simultaneously employed as identifiers of reactions, enzymes, and enzyme genes, thus linking metabolic and genomic information. Different methods should be available to automatically compare metabolic reactions and to automatically assign EC numbers to reactions still not officially classified.
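The reaction MOLMAP described above can be sketched numerically. Assuming each bond has already been assigned a winning neuron on the Kohonen map (here simplified to a one-dimensional grid; the real descriptor uses a two-dimensional SOM and also activates neighbouring neurons), the reaction descriptor is the products-minus-reactants difference:

```python
import numpy as np

def molecule_molmap(bond_neurons, grid_size=25):
    """Toy MOLMAP: a fixed-length vector counting how many bonds of a
    molecule activate each neuron of a (simplified, 1-D) Kohonen map.
    bond_neurons: winning-neuron index for each bond of the molecule."""
    m = np.zeros(grid_size)
    for neuron in bond_neurons:
        m[neuron] += 1
    return m

def reaction_molmap(reactants, products, grid_size=25):
    """Reaction descriptor: products MOLMAP minus reactants MOLMAP.
    Positive entries = bonds made, negative = bonds broken."""
    p = sum(molecule_molmap(b, grid_size) for b in products)
    r = sum(molecule_molmap(b, grid_size) for b in reactants)
    return p - r

# Hypothetical reaction: a bond of type 3 is broken, a bond of type 7 is made,
# and a bond of type 10 is conserved (it cancels in the difference).
desc = reaction_molmap(reactants=[[3, 10]], products=[[7, 10]])
print(desc[3], desc[7], desc[10])  # -1.0 1.0 0.0
```

Vectors of this kind can then be fed to SOMs or Random Forests to compare reactions and predict EC numbers, as the abstract describes.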
In this study, the genome-scale data set of enzymatic reactions available in the KEGG database was encoded by MOLMAP descriptors and submitted to Kohonen SOMs to compare the resulting map with the official EC number classification, to explore the possibility of predicting EC numbers from the reaction equation, and to assess the internal consistency of the EC classification at the class level. A general agreement with the EC classification was observed, i.e. a relationship between the similarity of MOLMAPs and the similarity of EC numbers. At the same time, MOLMAPs were able to discriminate between EC sub-subclasses. EC numbers could be assigned at the class, subclass, and sub-subclass levels with accuracies up to 92%, 80%, and 70% for independent test sets. The correspondence between chemical similarity of metabolic reactions and their MOLMAP descriptors was applied to the identification of a number of reactions mapped into the same neuron but belonging to different EC classes, which demonstrated the ability of the MOLMAP/SOM approach to verify the internal consistency of classifications in databases of metabolic reactions. RFs were also used to assign the four levels of the EC hierarchy from the reaction equation. EC numbers were correctly assigned in 95%, 90%, 85% and 86% of the cases (for independent test sets) at the class, subclass, sub-subclass and full EC number levels, respectively. Experiments on the classification of reactions from the main reactants and products were performed with RFs: EC numbers were assigned at the class, subclass and sub-subclass levels with accuracies of 78%, 74% and 63%, respectively. In the course of the experiments with metabolic reactions, we suggested that the MOLMAP/SOM concept could be extended to the representation of other levels of metabolic information, such as metabolic pathways.
Following the MOLMAP idea, the pattern of neurons activated by the reactions of a metabolic pathway is a representation of the reactions involved in that pathway - a descriptor of the metabolic pathway. This reasoning enabled the comparison of different pathways, the automatic classification of pathways, and a classification of organisms based on their biochemical machinery. The three levels of classification (from bonds to metabolic pathways) made it possible to map and perceive chemical similarities between metabolic pathways, even for pathways of different types of metabolism and pathways that do not share similarities in terms of EC numbers. Mapping of PES by neural networks (NNs). In a first series of experiments, ensembles of Feed-Forward NNs (EnsFFNNs) and Associative Neural Networks (ASNNs) were trained to reproduce PES represented by the Lennard-Jones (LJ) analytical potential function. The accuracy of the method was assessed by comparing the results of molecular dynamics simulations (thermal, structural, and dynamic properties) obtained from the NN-PES and from the LJ function. The results indicated that, for LJ-type potentials, NNs can be trained to generate accurate PES for use in molecular simulations. EnsFFNNs and ASNNs gave better results than single FFNNs. A remarkable ability of the NN models to interpolate between distant curves and accurately reproduce potentials for use in molecular simulations is shown. The purpose of the first study was to systematically analyse the accuracy of different NNs. Our main motivation, however, is reflected in the next study: the mapping of multidimensional PES by NNs to simulate, by Molecular Dynamics or Monte Carlo, the adsorption and self-assembly of solvated organic molecules on noble-metal electrodes. Indeed, for such complex and heterogeneous systems the development of suitable analytical functions that fit quantum mechanical interaction energies is a non-trivial or even impossible task.
The data consisted of energy values, from Density Functional Theory (DFT) calculations, at different distances, for several molecular orientations and three electrode adsorption sites. The results indicate that NNs require a data set large enough to cover the diversity of possible interaction sites, distances, and orientations well. NNs trained with such data sets can perform equally well or even better than analytical functions. Therefore, they can be used in molecular simulations, particularly for the ethanol/Au(111) interface, which is the case studied in the present thesis. Once properly trained, the networks are able to produce, as output, any required number of energy points for accurate interpolations.
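The Lennard-Jones reference potential used in the first series of experiments is straightforward to reproduce; the sketch below shows how (distance, energy) training pairs for a NN-PES could be generated in reduced units (the grid spacing is illustrative):

```python
def lennard_jones(r, epsilon=1.0, sigma=1.0):
    """LJ pair potential: V(r) = 4*eps*((sigma/r)**12 - (sigma/r)**6)."""
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6 * sr6 - sr6)

# Training pairs (distance, energy) sampled on a grid, as a NN-PES would use
training = [(r, lennard_jones(r)) for r in [0.95 + 0.05 * i for i in range(40)]]

# Sanity check: the well minimum sits at r = 2**(1/6)*sigma with depth -epsilon
print(round(lennard_jones(2 ** (1 / 6)), 9))  # -1.0
```

A network trained on such pairs (or on DFT energies, as in the second study) then serves as a smooth interpolator of the surface during molecular dynamics or Monte Carlo runs.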
Abstract:
To meet the increasing demands of complex inter-organizational processes and the demand for continuous innovation and internationalization, it is evident that new forms of organisation are being adopted, fostering more intensive collaboration processes and sharing of resources, in what can be called collaborative networks (Camarinha-Matos, 2006:03). Information and knowledge are crucial resources in collaborative networks, and their management is a fundamental process to optimize. Knowledge organisation and collaboration systems are thus important instruments for the success of collaborative networks of organisations, and have been researched over the last decade in the areas of computer science, information science, management sciences, terminology and linguistics. Nevertheless, research in this area has paid little attention to multilingual contexts of collaboration, which pose specific and challenging problems. It is clear that access to and representation of knowledge will happen more and more in multilingual settings, which implies overcoming the difficulties inherent in the presence of multiple languages, through the use of processes such as the localization of ontologies. Although localization, like other processes that involve multilingualism, is a rather well-developed practice, and its methodologies and tools are fruitfully employed by the language industry in the development and adaptation of multilingual content, it has not yet been sufficiently explored as an element of support for the development of knowledge representations - in particular ontologies - expressed in more than one language. Multilingual knowledge representation is thus an open research area calling for cross-contributions from knowledge engineering, terminology, ontology engineering, cognitive sciences, computational linguistics, natural language processing, and management sciences.
This workshop brought together researchers interested in multilingual knowledge representation, in a multidisciplinary environment, to debate the possibilities of cross-fertilization between knowledge engineering, terminology, ontology engineering, cognitive sciences, computational linguistics, natural language processing, and management sciences applied to contexts where multilingualism continuously creates new and demanding challenges for current knowledge representation methods and techniques. In this workshop, six papers dealing with different approaches to multilingual knowledge representation are presented, most of them describing tools, approaches and results obtained in the development of ongoing projects. In the first paper, Andrés Domínguez Burgos, Koen Kerremans and Rita Temmerman present a software module that is part of a workbench for terminological and ontological mining: Termontospider, a wiki crawler that aims to optimally traverse Wikipedia in search of domain-specific texts for extracting terminological and ontological information. The crawler is part of a tool suite for automatically developing multilingual termontological databases, i.e. ontologically underpinned multilingual terminological databases. In this paper the authors describe the basic principles behind the crawler and summarize the research setting in which the tool is currently being tested. In the second paper, Fumiko Kano presents work comparing four feature-based similarity measures derived from the cognitive sciences. The purpose of the comparative analysis presented by the author is to identify the potentially most effective model for mapping independent ontologies in a culturally influenced domain. For that purpose, datasets based on standardized pre-defined feature dimensions and values, obtainable from the UNESCO Institute for Statistics (UIS), have been used for the comparative analysis of the similarity measures.
The purpose of the comparison is to evaluate the similarity measures on these objectively developed datasets. According to the author, the results demonstrate that the Bayesian Model of Generalization provides the most effective cognitive model for identifying the most similar corresponding concepts for a targeted socio-cultural community. In another presentation, Thierry Declerck, Hans-Ulrich Krieger and Dagmar Gromann present ongoing work and propose an approach to the automatic extraction of information from multilingual financial Web resources, to provide candidate terms for building ontology elements or instances of ontology concepts. The authors present a complementary approach to the direct localization/translation of ontology labels: acquiring terminologies through the access and harvesting of the multilingual Web presences of structured-information providers in the field of finance. This leads to the detection of candidate terms in various multilingual sources in the financial domain that can be used not only as labels of ontology classes and properties but also for the possible generation of (multilingual) domain ontologies themselves. In the next paper, Manuel Silva, António Lucas Soares and Rute Costa claim that, despite the availability of tools, resources and techniques aimed at the construction of ontological artifacts, developing a shared conceptualization of a given reality still raises questions about the principles and methods that support the initial phases of conceptualization. These questions become, according to the authors, more complex when the conceptualization occurs in a multilingual setting.
To tackle these issues, the authors present a collaborative platform - conceptME - where terminological and knowledge representation processes support domain experts throughout a conceptualization framework, allowing the inclusion of multilingual data as a way to promote knowledge sharing, enhance conceptualization and support a multilingual ontology specification. In another presentation, Frieda Steurs and Hendrik J. Kockaert present TermWise, a large project dealing with legal terminology and phraseology for the Belgian public services, i.e. the translation office of the ministry of justice. The project aims at developing an advanced tool that includes expert knowledge in the algorithms that extract specialized language from textual data (legal documents); its outcome is a knowledge database including Dutch/French equivalents for legal concepts, enriched with the phraseology related to the terms under discussion. Finally, Deborah Grbac, Luca Losito, Andrea Sada and Paolo Sirito report on the preliminary results of a pilot project currently ongoing at the UCSC Central Library, where they propose to adapt, for subject librarians employed in large and multilingual academic institutions, the model used by translators working within European Union institutions. The authors are using User Experience (UX) analysis to provide subject librarians with visual support, by means of "ontology tables" depicting the conceptual linking and connections of words with concepts, presented according to their semantic and linguistic meaning. The organizers hope that the selection of papers presented here will be of interest to a broad audience and will be a starting point for further discussion and cooperation.