994 results for Geometric Semantic Genetic Programming


Relevance: 20.00%

Publisher:

Abstract:

Learning computer programming requires solving programming exercises. In computer programming courses, teachers need to assess and give feedback on a large number of exercises. These tasks are time-consuming and error-prone, since there are many aspects of good programming that should be considered. In this context, automatic assessment tools can play an important role, helping teachers with grading tasks and assisting students with automatic feedback. In spite of their usefulness, these tools lack integration mechanisms with other eLearning systems such as Learning Management Systems, Learning Objects Repositories or Integrated Development Environments. In this paper we provide a survey of programming evaluation systems. The survey gathers information on the interoperability features of these systems, categorizing and comparing them regarding content and communication standardization. This work may prove useful to instructors and computer science educators when they have to choose an assessment system to integrate into their e-Learning environment.

Relevance: 20.00%

Publisher:

Abstract:

This paper presents a tool called Petcha that acts as an automated Teaching Assistant in computer programming courses. The ultimate objective of Petcha is to increase the number of programming exercises effectively solved by students. Petcha meets this objective by helping both teachers to author programming exercises and students to solve them. It also coordinates a network of heterogeneous systems, integrating automatic program evaluators, learning management systems, learning object repositories and integrated programming environments. This paper presents the concept and design of Petcha and sets this tool in a service-oriented architecture for managing learning processes based on the automatic evaluation of programming exercises. The paper also presents a case study that validates the use of Petcha and of the proposed architecture.

Relevance: 20.00%

Publisher:

Abstract:

Assessment plays a vital role in learning. This is certainly the case with the assessment of computer programs, both in curricular and competitive learning. The lack of a standard – or at least a widely used format – creates a modern Babel tower made of Learning Objects, of assessment items that cannot be shared among automatic assessment systems. These systems, whose interoperability is hindered by the lack of a common format, include contest management systems, evaluation engines, repositories of learning objects and authoring tools. A pragmatic approach to remedy this problem is to create a service to convert among existing formats: a kind of translation service specialized in programming problem formats. Converting programming exercises on-the-fly among the most used formats is the purpose of BabeLO – a service to cope with the existing Babel of Learning Object formats for programming exercises. BabeLO was designed as a service to act as a middleware in a network of systems typically used in automatic assessment of programs. It provides support for multiple exercise formats and can be used by: evaluation engines, to assess exercises regardless of their format; repositories, to import exercises from various sources; and authoring systems, to create exercises in multiple formats or based on exercises from other sources. This paper analyses several existing formats to highlight both their differences and their similar features. Based on this analysis, it presents an approach to extensible format conversion. It also presents the features of PExIL, the pivot format on which the conversion is based, and the function definitions of the proposed service – BabeLO. Details on the design and implementation of BabeLO, including the service API and the interfaces required to extend the conversion to a new format, are also provided. To evaluate the effectiveness and efficiency of this approach, this paper reports on two actual uses of BabeLO: to relocate exercises to a different repository, and to use an evaluation engine in a network of heterogeneous systems.
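To illustrate the pivot-format architecture sketched in this abstract, the following Python fragment shows the general pattern of converting through a single intermediate representation; the registry, the format names and the toy converters below are hypothetical stand-ins, not the actual BabeLO or PExIL API.

from typing import Callable, Dict

# Hypothetical pivot dictionary standing in for a PExIL-like exercise description.
PivotExercise = Dict[str, object]

# Registries of hypothetical converters: every supported format only needs
# a reader (format -> pivot) and a writer (pivot -> format).
READERS: Dict[str, Callable[[str], PivotExercise]] = {}
WRITERS: Dict[str, Callable[[PivotExercise], str]] = {}

def register(fmt: str, reader, writer) -> None:
    READERS[fmt] = reader
    WRITERS[fmt] = writer

def convert(payload: str, source_fmt: str, target_fmt: str) -> str:
    """Convert between any two registered formats through the pivot,
    so n formats need 2n converters instead of n*(n-1)."""
    pivot = READERS[source_fmt](payload)
    return WRITERS[target_fmt](pivot)

# Toy formats for demonstration only.
register("keyvalue",
         lambda s: dict(line.split("=", 1) for line in s.splitlines()),
         lambda p: "\n".join(f"{k}={v}" for k, v in p.items()))
register("tsv",
         lambda s: dict(line.split("\t", 1) for line in s.splitlines()),
         lambda p: "\n".join(f"{k}\t{v}" for k, v in p.items()))

if __name__ == "__main__":
    print(convert("title=Hello World\nlimit=1s", "keyvalue", "tsv"))

The design point is that supporting a new exercise format only requires registering one reader and one writer for it, which is the kind of extension interface the paper describes for BabeLO.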

Relevance: 20.00%

Publisher:

Abstract:

Extracting the semantic relatedness of terms is an important topic in several areas, including data mining, information retrieval and web recommendation. This paper presents an approach for computing the semantic relatedness of terms using the knowledge base of DBpedia – a community effort to extract structured information from Wikipedia. Several approaches to extract semantic relatedness from Wikipedia using bag-of-words vector models are already available in the literature. The research presented in this paper explores a novel approach using paths on an ontological graph extracted from DBpedia. It is based on an algorithm for finding and weighting a collection of paths connecting concept nodes. This algorithm was implemented in a tool called Shakti that extracts relevant ontological data for a given domain from DBpedia using its SPARQL endpoint. To validate the proposed approach, Shakti was used to recommend web pages on a Portuguese social site related to alternative music, and the results of that experiment are reported in this paper.
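As a rough, hypothetical sketch of the kind of pipeline described here (not the actual Shakti implementation), the following Python fragment pulls a one-hop neighbourhood of two DBpedia resources through the public SPARQL endpoint and scores their relatedness by accumulating a decaying weight over the paths that connect them; the decay factor, the path-length cap and the neighbourhood query are illustrative choices only.

import networkx as nx
from SPARQLWrapper import SPARQLWrapper, JSON

ENDPOINT = "https://dbpedia.org/sparql"

def neighbours(resource: str, limit: int = 200):
    """Return (resource, object) pairs linked by any predicate in DBpedia."""
    sparql = SPARQLWrapper(ENDPOINT)
    sparql.setQuery(f"""
        SELECT ?o WHERE {{ <{resource}> ?p ?o .
                           FILTER(isIRI(?o)) }} LIMIT {limit}
    """)
    sparql.setReturnFormat(JSON)
    rows = sparql.query().convert()["results"]["bindings"]
    return [(resource, r["o"]["value"]) for r in rows]

def relatedness(graph: nx.Graph, a: str, b: str,
                max_len: int = 4, decay: float = 0.5) -> float:
    """Sum a decaying weight over simple paths connecting a and b
    (illustrative weighting, not the paper's exact scheme)."""
    score = 0.0
    for path in nx.all_simple_paths(graph, a, b, cutoff=max_len):
        score += decay ** (len(path) - 1)
    return score

# Usage sketch: build a tiny graph around two concepts and score them.
g = nx.Graph()
for seed in ("http://dbpedia.org/resource/Radiohead",
             "http://dbpedia.org/resource/Portishead_(band)"):
    g.add_node(seed)
    g.add_edges_from(neighbours(seed))
print(relatedness(g, "http://dbpedia.org/resource/Radiohead",
                     "http://dbpedia.org/resource/Portishead_(band)"))

A real implementation would follow typed ontology properties and weight edges by their relevance to the domain, as the abstract suggests, rather than treating all predicates uniformly.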

Relevance: 20.00%

Publisher:

Abstract:

Several standards have appeared in recent years to formalize the metadata of learning objects, but they are still insufficient to fully describe a specialized domain. In particular, the programming exercise domain requires interdependent resources (e.g. test cases, solution programs, exercise description) that are usually processed by different services in the programming exercise lifecycle. Moreover, the manual creation of these resources is time-consuming and error-prone, which is an obstacle to the fast development of programming exercises of good quality. This chapter focuses on the definition of an XML dialect called PExIL (Programming Exercises Interoperability Language). The aim of PExIL is to consolidate all the data required in the programming exercise lifecycle, from when the exercise is created to when it is graded, also covering its resolution, evaluation, and feedback. The authors introduce the XML Schema used to formalize the relevant data of the programming exercise lifecycle. This approach is validated by evaluating the usefulness and the expressiveness of the PExIL definition. For the former, the authors present the tools that consume the PExIL definition to automatically generate specialized resources. For the latter, they use the PExIL definition to capture all the constraints of a set of programming exercises stored in a learning objects repository.
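To make the XML Schema formalization concrete, here is a minimal validation sketch using lxml; the element names (programming-exercise, title, testcase) are invented for illustration and do not reproduce the actual PExIL schema.

from lxml import etree

# Illustrative schema: NOT the real PExIL XSD, just the validation pattern.
XSD = b"""<?xml version="1.0"?>
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <xs:element name="programming-exercise">
    <xs:complexType>
      <xs:sequence>
        <xs:element name="title" type="xs:string"/>
        <xs:element name="testcase" maxOccurs="unbounded">
          <xs:complexType>
            <xs:sequence>
              <xs:element name="input" type="xs:string"/>
              <xs:element name="output" type="xs:string"/>
            </xs:sequence>
          </xs:complexType>
        </xs:element>
      </xs:sequence>
    </xs:complexType>
  </xs:element>
</xs:schema>"""

schema = etree.XMLSchema(etree.fromstring(XSD))

doc = etree.fromstring(b"""<programming-exercise>
  <title>Sum two integers</title>
  <testcase><input>1 2</input><output>3</output></testcase>
</programming-exercise>""")

print(schema.validate(doc))   # True if the exercise document matches the schema

In the scenario described by the chapter, a schema of this kind would be the contract shared by the authoring tools, repositories and evaluators that consume the exercise description.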

Relevance: 20.00%

Publisher:

Abstract:

IEEE International Symposium on Circuits and Systems, pp. 724-727, Seattle, USA

Relevance: 20.00%

Publisher:

Abstract:

The iterative simulation of the Brownian bridge is well known. In this article, we present a vectorial simulation alternative based on Gaussian processes for machine learning regression that is suitable for implementation in interpreted programming languages. We extend the vectorial simulation of path-dependent trajectories to other Gaussian processes, namely sequences of Brownian bridges, geometric Brownian motion, fractional Brownian motion, and the Ornstein-Uhlenbeck mean reversion process.
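A minimal sketch of the vectorial idea, under the assumption that "vectorial" means drawing a whole discretized trajectory at once from the Gaussian-process covariance rather than stepping through time; only the standard Brownian bridge case is shown, and the grid and pinning values are illustrative.

import numpy as np

def brownian_bridge(t, a=0.0, b=0.0, n_paths=1, rng=None):
    """Vectorised simulation of a Brownian bridge on [0, 1].

    The bridge conditioned on B(0)=a and B(1)=b is a Gaussian process with
    mean m(t) = a + t*(b - a) and covariance k(s, t) = min(s, t) - s*t,
    so a whole trajectory is mean + L @ z, with L the Cholesky factor of K.
    """
    rng = np.random.default_rng(rng)
    t = np.asarray(t, dtype=float)             # interior grid points in (0, 1)
    K = np.minimum.outer(t, t) - np.outer(t, t)
    K += 1e-12 * np.eye(len(t))                # jitter for numerical stability
    L = np.linalg.cholesky(K)
    mean = a + t * (b - a)
    z = rng.standard_normal((len(t), n_paths))
    return mean[:, None] + L @ z               # shape (len(t), n_paths)

# Usage: 10 bridge paths from 0 to 0 sampled on 99 interior points.
grid = np.linspace(0.01, 0.99, 99)
paths = brownian_bridge(grid, n_paths=10, rng=42)
print(paths.shape)   # (99, 10)

The same pattern extends to the other processes mentioned in the abstract by replacing the mean and covariance functions (for example, k(s, t) = min(s, t) for Brownian motion, or the fractional Brownian motion covariance).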

Relevance: 20.00%

Publisher:

Abstract:

Computerized scheduling methods and computerized scheduling systems according to exemplary embodiments. A computerized scheduling method may be stored in a memory and executed on one or more processors. The method may include defining a main multi-machine scheduling problem as a plurality of single-machine scheduling problems; independently solving the plurality of single-machine scheduling problems, thereby calculating a plurality of near-optimal single-machine scheduling problem solutions; integrating the plurality of near-optimal single-machine scheduling problem solutions into a main multi-machine scheduling problem solution; and outputting the main multi-machine scheduling problem solution.
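A minimal Python sketch of the decompose-solve-integrate pattern claimed here; the abstract does not specify the single-machine solver, so a weighted shortest processing time rule (optimal for total weighted completion time on one machine) stands in for the "near optimal" subproblem solutions, and the Job fields are illustrative.

from dataclasses import dataclass
from typing import Dict, List

@dataclass
class Job:
    job_id: str
    machine: str          # machine this operation must run on
    proc_time: float
    weight: float = 1.0

def solve_single_machine(jobs: List[Job]) -> List[Job]:
    """Placeholder single-machine solver: weighted shortest processing time."""
    return sorted(jobs, key=lambda j: j.proc_time / j.weight)

def schedule(jobs: List[Job]) -> Dict[str, List[str]]:
    """Decompose the multi-machine problem by machine, solve each
    single-machine problem independently, then integrate the solutions."""
    by_machine: Dict[str, List[Job]] = {}
    for job in jobs:
        by_machine.setdefault(job.machine, []).append(job)
    return {m: [j.job_id for j in solve_single_machine(js)]
            for m, js in by_machine.items()}

jobs = [Job("J1", "M1", 4.0), Job("J2", "M1", 2.0, weight=2.0),
        Job("J3", "M2", 5.0), Job("J4", "M2", 1.0)]
print(schedule(jobs))   # {'M1': ['J2', 'J1'], 'M2': ['J4', 'J3']}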

Relevance: 20.00%

Publisher:

Abstract:

The Aljezur "graben" is a crucial piece in understanding the Cenozoic evolution of the SW Atlantic Portuguese margin. Detailed study of the sedimentary infill and of the bordering faults allows the identification of several evolutionary stages since the Miocene. The graben is bordered by faults that displace geomorphological surfaces (Littoral Platform to the W, Interior Platform to the E) as well as Neogene sedimentary units. The sedimentary infill is composed of conglomerates and sands grading into clays and bioclastic limestones (Burdigalian to Serravallian), upon which fine reddish sands, sometimes with abundant micas, lie unconformably. The genetic and geometric relationships between these sands, those on higher surfaces outside the "graben", and the main bordering faults are discussed. In conclusion, a reconstruction of the tectono-sedimentary evolution is attempted, integrating it in a "pull-apart" context associated with the Messejana fault system and its reactivation by the differently oriented Alpine compressions.

Relevance: 20.00%

Publisher:

Abstract:

We consider an optimal control problem with a deterministic finite horizon and state variable dynamics given by a Markov-switching jump–diffusion stochastic differential equation. Our main results extend the dynamic programming technique to this larger family of stochastic optimal control problems. More specifically, we provide a detailed proof of Bellman's optimality principle (or dynamic programming principle) and obtain the corresponding Hamilton–Jacobi–Bellman equation, which turns out to be a partial integro-differential equation due to the extra terms arising from the Lévy process and the Markov process. As an application of our results, we study a finite horizon consumption–investment problem for a jump–diffusion financial market consisting of one risk-free asset and one risky asset whose coefficients are assumed to depend on the state of a continuous time finite state Markov process. We provide a detailed study of the optimal strategies for this problem, for the economically relevant families of power utilities and logarithmic utilities.
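For orientation, the resulting equation has, schematically, the following shape in one spatial dimension; the notation below is generic and not taken from the paper, with regimes i of the Markov chain, generator rates q_ij, Lévy measure ν, controls u, running gain f, and terminal gain g. The drift and diffusion terms of the classical HJB equation are augmented by an integral term coming from the jumps and a coupling term coming from the regime switching.

% Schematic HJB partial integro-differential equation (generic notation, one state dimension)
\begin{aligned}
0 ={}& \partial_t V_i(t,x)
  + \sup_{u \in U} \Big\{ f(t,x,u,i)
  + b(t,x,u,i)\,\partial_x V_i(t,x)
  + \tfrac{1}{2}\,\sigma^2(t,x,u,i)\,\partial_{xx} V_i(t,x) \\
&\quad + \int_{\mathbb{R}} \big[ V_i\big(t, x + \gamma(t,x,u,i,z)\big) - V_i(t,x) \big]\, \nu(\mathrm{d}z)
  + \sum_{j \neq i} q_{ij}\,\big[ V_j(t,x) - V_i(t,x) \big] \Big\}, \\
&\qquad V_i(T,x) = g(x,i) \quad \text{for every regime } i.
\end{aligned}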

Relevance: 20.00%

Publisher:

Abstract:

The decision on endocrine breast cancer treatment relies on IHC-based assessment of ERα. However, ER positivity does not predict response in all cases, in part due to methodological limitations of IHC. We investigated whether ESR1 and ESR2 gene expression, and the methylation of the respective promoters, may be related to the non-favorable outcome of a proportion of tamoxifen-treated patients, as well as to the loss of ERα and ERß. Formalin-fixed, paraffin-embedded breast cancer samples from 211 patients diagnosed between 1988 and 2004 were submitted to IHC-based determination of the ERα and ERß proteins. ESR1 whole mRNA and promoter C specific transcript (ESR1_C) levels, as well as the ESR2_ß1, ESR2_ß2/cx, and ESR2_ß5 transcripts, were assessed by real-time PCR. ESR1 promoters A and C, and ESR2 promoters 0N and 0K, were investigated by CpG methylation analysis using bisulfite-PCR followed by restriction analysis or methylation-specific PCR. Given the promising results related to ESR1 promoter methylation, we also used a quantitative method based on matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF MS), together with the Epityper software, to measure methylation at promoters A and C. mRNA stability was assessed in actinomycin D-treated MCF-7 and MDA-MB-231 cells. ERα protein was quantified using transiently transfected breast cancer cells. Low ESR1_C transcript levels were associated with better overall survival (p = 0.017). High levels of the ESR1_C transcript were associated with a non-favorable response in tamoxifen-treated patients (HR = 2.48; 95% CI 1.24-4.99), an effect that was more pronounced in patients with ERα/PgR double-positive tumors (HR = 3.41; 95% CI 1.45-8.04). The ESR1_C isoform had a prolonged mRNA half-life and a more relaxed 5'UTR secondary structure compared to the ESR1_A isoform. Western blot analysis showed that, at the protein level, promoter selectivity is indistinguishable. There was no correlation between the levels of the ESR2 isoforms, or ESR2 promoter methylation, and ERß protein staining. CpG methylation of ESR1 promoter C, and not of promoter A, was responsible for ERα loss. We propose ESR1_C transcript levels as a putative novel marker for breast cancer prognosis and for the prediction of tamoxifen response.

Relevance: 20.00%

Publisher:

Abstract:

Hyperspectral remote sensing exploits the electromagnetic scattering patterns of the different materials at specific wavelengths [2, 3]. Hyperspectral sensors have been developed to sample the scattered portion of the electromagnetic spectrum extending from the visible region through the near-infrared and mid-infrared, in hundreds of narrow contiguous bands [4, 5]. The number and variety of potential civilian and military applications of hyperspectral remote sensing is enormous [6, 7]. Very often, the resolution cell corresponding to a single pixel in an image contains several substances (endmembers) [4]. In this situation, the scattered energy is a mixture of the endmember spectra. A challenging task underlying many hyperspectral imagery applications is then decomposing a mixed pixel into a collection of reflectance spectra, called endmember signatures, and the corresponding abundance fractions [8–10]. Depending on the mixing scales at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds approximately when the mixing scale is macroscopic [13] and there is negligible interaction among distinct endmembers [3, 14]. If, however, the mixing scale is microscopic (or intimate mixtures) [15, 16] and the incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [17], the linear model is no longer accurate.

Linear spectral unmixing has been intensively researched in recent years [9, 10, 12, 18–21]. It considers that a mixed pixel is a linear combination of endmember signatures weighted by the corresponding abundance fractions. Under this model, and assuming that the number of substances and their reflectance spectra are known, hyperspectral unmixing is a linear problem for which many solutions have been proposed (e.g., maximum likelihood estimation [8], spectral signature matching [22], spectral angle mapper [23], subspace projection methods [24, 25], and constrained least squares [26]). In most cases, the number of substances and their reflectances are not known, and then hyperspectral unmixing falls into the class of blind source separation problems [27]. Independent component analysis (ICA) has recently been proposed as a tool to blindly unmix hyperspectral data [28–31]. ICA is based on the assumption of mutually independent sources (abundance fractions), which is not the case for hyperspectral data, since the sum of the abundance fractions is constant, implying statistical dependence among them. This dependence compromises the applicability of ICA to hyperspectral images, as shown in Refs. [21, 32]. In fact, ICA finds the endmember signatures by multiplying the spectral vectors with an unmixing matrix which minimizes the mutual information among sources. If the sources are independent, ICA provides the correct unmixing, since the minimum of the mutual information is obtained only when the sources are independent. This is no longer true for dependent abundance fractions. Nevertheless, some endmembers may be approximately unmixed. These aspects are addressed in Ref. [33].

Under the linear mixing model, the observations from a scene lie in a simplex whose vertices correspond to the endmembers. Several approaches [34–36] have exploited this geometric feature of hyperspectral mixtures [35]. The minimum volume transform (MVT) algorithm [36] determines the simplex of minimum volume containing the data. The method presented in Ref. [37] is also of MVT type but, by introducing the notion of bundles, it takes into account the endmember variability usually present in hyperspectral mixtures. The MVT-type approaches are complex from the computational point of view. Usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum volume simplex to it. For example, the gift wrapping algorithm [38] computes the convex hull of n data points in a d-dimensional space with a computational complexity of O(n^(⌊d/2⌋+1)), where ⌊x⌋ is the largest integer less than or equal to x and n is the number of samples. The complexity of the method presented in Ref. [37] is even higher, since the temperature of the simulated annealing algorithm used must follow a log(·) law [39] to assure convergence (in probability) to the desired solution.

Aiming at a lower computational complexity, some algorithms such as the pixel purity index (PPI) [35] and N-FINDR [40] still find the minimum volume simplex containing the data cloud, but they assume the presence of at least one pure pixel of each endmember in the data. This is a strong requisite that may not hold in some data sets. In any case, these algorithms find the set of most pure pixels in the data. The PPI algorithm uses the minimum noise fraction (MNF) [41] as a preprocessing step to reduce dimensionality and to improve the signal-to-noise ratio (SNR). The algorithm then projects every spectral vector onto skewers (a large number of random vectors) [35, 42, 43]. The points corresponding to the extremes, for each skewer direction, are stored. A cumulative account records the number of times each pixel (i.e., a given spectral vector) is found to be an extreme. The pixels with the highest scores are the purest ones. The N-FINDR algorithm [40] is based on the fact that in p spectral dimensions, the p-volume defined by a simplex formed by the purest pixels is larger than any other volume defined by any other combination of pixels. This algorithm finds the set of pixels defining the largest volume by inflating a simplex inside the data. ORASIS [44, 45] is a hyperspectral framework developed by the U.S. Naval Research Laboratory consisting of several algorithms organized in six modules: exemplar selector, adaptive learner, demixer, knowledge base or spectral library, and spatial postprocessor. The first step consists in flat-fielding the spectra. Next, the exemplar selection module is used to select spectral vectors that best represent the smaller convex cone containing the data. The other pixels are rejected when the spectral angle distance (SAD) is less than a given threshold. The procedure finds the basis for a subspace of lower dimension using a modified Gram–Schmidt orthogonalization. The selected vectors are then projected onto this subspace and a simplex is found by an MVT process. ORASIS is oriented to real-time target detection from uncrewed air vehicles using hyperspectral data [46].

In this chapter we develop a new algorithm to unmix linear mixtures of endmember spectra. First, the algorithm determines the number of endmembers and the signal subspace using a newly developed concept [47, 48]. Second, the algorithm extracts the most pure pixels present in the data. Unlike other methods, this algorithm is completely automatic and unsupervised. To estimate the number of endmembers and the signal subspace in hyperspectral linear mixtures, the proposed scheme begins by estimating the signal and noise correlation matrices. The latter is based on multiple regression theory. The signal subspace is then identified by selecting the set of signal eigenvalues that best represents the data, in the least-squares sense [48, 49]; we note, however, that VCA works with both projected and unprojected data. The extraction of the endmembers exploits two facts: (1) the endmembers are the vertices of a simplex and (2) the affine transformation of a simplex is also a simplex. Like the PPI and N-FINDR algorithms, VCA also assumes the presence of pure pixels in the data. The algorithm iteratively projects the data onto a direction orthogonal to the subspace spanned by the endmembers already determined. The new endmember signature corresponds to the extreme of the projection. The algorithm iterates until all endmembers are exhausted. VCA performs much better than PPI and better than or comparably to N-FINDR; yet it has a computational complexity between one and two orders of magnitude lower than N-FINDR. The chapter is structured as follows. Section 19.2 describes the fundamentals of the proposed method. Section 19.3 and Section 19.4 evaluate the proposed algorithm using simulated and real data, respectively. Section 19.5 presents some concluding remarks.
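The projection-and-extreme step at the heart of the extraction stage can be sketched in a few lines of NumPy. This is a toy illustration of the idea under the pure-pixel assumption, with no noise whitening, no dimensionality reduction and no SNR-dependent projective transform, so it is not the published VCA algorithm.

import numpy as np

def extract_endmembers(Y, p, rng=None):
    """Y: (bands, pixels) mixed data; p: number of endmembers to extract.
    Iteratively project the data onto a direction orthogonal to the subspace
    spanned by the endmembers found so far; the extreme of each projection
    is taken as the next endmember (assumes pure pixels exist in Y)."""
    rng = np.random.default_rng(rng)
    bands, _ = Y.shape
    E = np.zeros((bands, p))
    indices = []
    for k in range(p):
        w = rng.standard_normal(bands)
        if k > 0:                                  # remove the component in span(E[:, :k])
            Q, _ = np.linalg.qr(E[:, :k])
            w -= Q @ (Q.T @ w)
        w /= np.linalg.norm(w)
        j = int(np.argmax(np.abs(w @ Y)))          # extreme of the projection
        indices.append(j)
        E[:, k] = Y[:, j]
    return E, indices

# Usage: synthetic linear mixtures of 3 endmembers in 50 bands.
rng = np.random.default_rng(0)
M = rng.uniform(0, 1, (50, 3))                     # true endmember signatures
A = rng.dirichlet(np.ones(3), size=1000).T         # abundances (non-negative, sum to one)
A[:, :3] = np.eye(3)                               # guarantee one pure pixel per endmember
Y = M @ A
E, idx = extract_endmembers(Y, 3, rng=1)
print(idx)                                         # indices of the recovered pure pixels

On noiseless simplex data the extreme of a linear projection is always attained at a vertex, which is why taking the maximum of each projection recovers pure pixels when they are present.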

Relevance: 20.00%

Publisher:

Abstract:

The concepts and instruments required for the teaching and learning of geometric optics are introduced in the didactic process without a proper didactic transposition. This claim is supported by the ample evidence of both wide- and deep-rooted alternative concepts on the topic. Didactic transposition is a theory that comes from a reflection on the teaching and learning process in mathematics but has been used in other disciplinary fields. It will be used in this work in order to clear up the main obstacles in the teaching-learning process of geometric optics. We proceed to argue that since Newton's approach to optics, in his Book I of Opticks, is independent of the corpuscular or undulatory nature of light, it is the most suitable for a constructivist learning environment. However, Newton's theory must be subject to a proper didactic transposition to help overcome these alternative concepts. We then describe our didactic transposition, which creates the knowledge to be taught through a dialogical process between students' previous knowledge, the history of optics, and the desired outcomes in geometrical optics in an elementary pre-service teacher training course. Finally, we use the scheme-facet structure of knowledge both to analyse and discuss our results and to illuminate shortcomings that must be addressed in the next stage of our inquiry.

Relevance: 20.00%

Publisher:

Abstract:

In the present study we report the results of an analysis, based on serotyping, multilocus enzyme electrophoresis (MEE), and ribotyping, of N. meningitidis serogroup C strains isolated from patients with meningococcal disease (MD) in the Rio Grande do Sul (RS) and Santa Catarina (SC) States, Brazil, after the Center of Epidemiology Control of the Ministry of Health detected an increase in MD cases due to this serogroup in the last two years (1992-1993). We have demonstrated that the MD cases due to N. meningitidis serogroup C strains occurring in RS and SC States in the last 4 years were caused mainly by one clone of strains (ET 40), with isolates indistinguishable by serogroup, serotype, subtype and even by ribotyping. A small number of cases that were not due to ET 40 strains represent closely related clones that are probably new lineages generated from the ET 40 clone, referred to as the ET 11A complex. We have also analyzed N. meningitidis serogroup C strains isolated in greater São Paulo in 1976 as representative of the first post-epidemic year in that region. The ribotyping method, as well as MEE, could provide useful information about the clonal characteristics of those isolates and also of the strains isolated in south Brazil. By ribotyping, sulfonamide sensitivity, and MEE results, the strains from 1976 show more similarity with the current endemic strains than with the epidemic strains. In conclusion, serotyping with monoclonal antibodies (C:2b:P1.3), MEE (ET 11 and ET 11A complex), and ribotyping using the ClaI restriction enzyme (Rb2) were useful to characterize these epidemic strains of N. meningitidis related to the increased incidence of MD in different States of south Brazil. It is most probable that these N. meningitidis serogroup C strains have poor or no genetic correlation with the 1971-1975 epidemic serogroup C strains. The genetic similarity of the members of the ET 11 and ET 11A complex was confirmed by the ribotyping method using three restriction endonucleases.

Relevance: 20.00%

Publisher:

Abstract:

In the present study we report the results of an analysis, based on ribotyping, of Corynebacterium diphtheriae intermedius strains isolated from a 9-year-old child with clinical diphtheria and his 5 contacts. Quantitative analysis of the RFLPs of rRNA was used to determine the relatedness of these 7 C. diphtheriae strains, providing supporting data for diphtheria epidemiology. We have also tested those strains for toxigenicity, in vitro by using Elek's gel diffusion method and in vivo by using a cell culture method on cultured monkey kidney cells (VERO cells). The hybridization results revealed that the 5 C. diphtheriae strains isolated from contacts and the one isolated from the clinical case (nose case strain) had identical RFLP patterns with all 4 restriction endonucleases used, ribotype B. The genetic distance between this ribotype and ribotype A (throat case strain), which we initially assumed to be responsible for the illness of the patient, was 0.450, showing poor genetic correlation between these two ribotypes. We found no significant differences concerning toxin production using the cell culture method. In conclusion, the use of RFLPs of the rRNA gene was successful in detecting minor differences in closely related toxigenic C. diphtheriae intermedius strains and provided information about the genetic relationships among them.