925 results for Subroutines in Procedural Programming Languages


Relevance:

100.00%

Publisher:

Abstract:

The broad aim of biomedical science in the postgenomic era is to link genomic and phenotype information to allow a deeper understanding of the processes leading from genomic changes to altered phenotype and disease. The EuroPhenome project (http://www.EuroPhenome.org) is a comprehensive resource for raw and annotated high-throughput phenotyping data arising from projects such as EUMODIC. EUMODIC is gathering data from the EMPReSSslim pipeline (http://www.empress.har.mrc.ac.uk/), which is performed on inbred mouse strains and knock-out lines arising from the EUCOMM project. The EuroPhenome interface allows the user to access the data via phenotype or genotype, and in a variety of ways, including graphical display, statistical analysis and access to the raw data via web services. The raw phenotyping data captured in EuroPhenome is annotated by an annotation pipeline which automatically identifies statistically different mutants from the appropriate baseline and assigns ontology terms for that specific test. Mutant phenotypes can be quickly identified using two EuroPhenome tools: PhenoMap, a graphical representation of statistically relevant phenotypes, and mining for a mutant using ontology terms. To assist with data definition and cross-database comparisons, phenotype data is annotated using combinations of terms from biological ontologies.
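
As a rough illustration of the baseline comparison described above, the sketch below flags a mutant line whose measurements differ from an inbred baseline using Welch's t-test and attaches an ontology-style label. The pipeline's actual statistics, thresholds and ontology mapping are not specified in the abstract, so the test, the cut-off and the term are all illustrative.

```cpp
#include <vector>
#include <cmath>
#include <cstdio>

struct Stats { double mean, var; };

Stats summarize(const std::vector<double>& xs) {
    double m = 0.0;
    for (double x : xs) m += x;
    m /= xs.size();
    double v = 0.0;
    for (double x : xs) v += (x - m) * (x - m);
    v /= (xs.size() - 1);                      // sample variance
    return {m, v};
}

// Welch's t statistic: tolerates unequal variances and sample sizes.
double welch_t(const std::vector<double>& a, const std::vector<double>& b) {
    Stats sa = summarize(a), sb = summarize(b);
    return (sa.mean - sb.mean) /
           std::sqrt(sa.var / a.size() + sb.var / b.size());
}

int main() {
    std::vector<double> baseline = {21.0, 22.5, 20.8, 21.9, 22.1};
    std::vector<double> mutant   = {25.2, 26.1, 24.8, 25.9, 25.4};

    double t = welch_t(mutant, baseline);
    if (std::fabs(t) > 2.0)    // crude critical value, not an exact p-value
        std::printf("t = %.2f: annotate, e.g. 'increased body weight'\n", t);
    else
        std::printf("t = %.2f: no annotation\n", t);
    return 0;
}
```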

Relevance:

100.00%

Publisher:

Abstract:

We report the case study of a French-Spanish bilingual dyslexic girl, MP, who exhibited a severe visual attention (VA) span deficit but preserved phonological skills. Behavioural investigation showed a severe reduction of reading speed for both single items (words and pseudo-words) and texts in the two languages. However, performance was more affected in French than in Spanish. MP was administered an intensive VA span intervention programme. Pre-post intervention comparison revealed a positive effect of the intervention on her VA span abilities. The intervention further transferred to reading. It primarily resulted in faster identification of regular and irregular words in French. The effect of intervention was rather modest in Spanish, which showed only a tendency towards faster word reading. Text reading improved in the two languages, with a stronger effect in French, but pseudo-word reading did not improve in either French or Spanish. The overall results suggest that VA span intervention may primarily enhance the fast global reading procedure, with stronger effects in French than in Spanish. MP underwent two fMRI sessions to explore her brain activations before and after VA span training. Prior to the intervention, fMRI assessment showed activation of the striate and extrastriate visual cortices alone, but of none of the regions typically involved in VA span. Post-training fMRI revealed increased activation of the superior and inferior parietal cortices. Comparison of pre- and post-training activations revealed a significant activation increase in the superior parietal lobes (BA 7) bilaterally. Thus, we show that a specific VA span intervention not only modulates reading performance but also results in increased brain activity within the superior parietal lobes, known to house VA span abilities. Furthermore, the positive effects of VA span intervention on reading suggest that the ability to process multiple visual elements simultaneously is one cause of successful reading acquisition.

Relevance:

100.00%

Publisher:

Abstract:

The goal of this Master's thesis was to develop methods and guidelines for development-time testing of the embedded software of a frequency converter. Suitable methods were sought through an extensive literature survey and by examining the company's testing practices. The methods found in the literature and studied included testing frameworks, simulation, and static and automated testing. The literature was also searched for methods that ease or otherwise improve the testing process; of these, the selection of test data, test-driven development and the improvement of testability were studied, among others. In addition, programming languages suitable for writing reusable tests were surveyed. Interviews and documentation gave a good picture of the testing practices prevailing in the company and of their problem areas. The problems identified were a lack of systematicity in the testing process and a need for testing training for the designers. To improve the testing process, the adoption of a module testing framework is proposed. In addition, testing training for the designers is expected to have a large impact on the whole testing process. For the design of test cases, methods are presented that help design more comprehensive tests.
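
To make the recommendation concrete, here is a minimal sketch of what a module (unit) testing framework boils down to; a production project would adopt an established framework, and the macro, the module under test and the values below are all illustrative, not taken from the thesis.

```cpp
#include <cstdio>

static int tests_run = 0, tests_failed = 0;

// Record a check; on failure, report file, line and the failing expression.
#define CHECK(cond)                                                      \
    do {                                                                 \
        ++tests_run;                                                     \
        if (!(cond)) {                                                   \
            ++tests_failed;                                              \
            std::printf("FAIL %s:%d: %s\n", __FILE__, __LINE__, #cond);  \
        }                                                                \
    } while (0)

// Module under test: a saturating add such as a drive's ramp limiter might use.
int sat_add(int a, int b, int lo, int hi) {
    long long s = static_cast<long long>(a) + b;
    if (s < lo) return lo;
    if (s > hi) return hi;
    return static_cast<int>(s);
}

int main() {
    CHECK(sat_add(2, 3, 0, 10) == 5);        // normal case
    CHECK(sat_add(8, 9, 0, 10) == 10);       // clamps at upper limit
    CHECK(sat_add(-8, -9, -10, 10) == -10);  // clamps at lower limit
    std::printf("%d tests, %d failures\n", tests_run, tests_failed);
    return tests_failed != 0;                // nonzero exit on failure
}
```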

Relevance:

100.00%

Publisher:

Abstract:

In bubbly flow simulations, the bubble size distribution is an important factor in the determination of hydrodynamics. Besides hydrodynamics, it is crucial in the prediction of the interfacial area available for mass transfer and of the reaction rate in gas-liquid reactors such as bubble columns. Solution of population balance equations is a method which can help to model the size distribution by considering continuous bubble coalescence and breakage. Therefore, in Computational Fluid Dynamics simulations it is necessary to couple CFD and a Population Balance Model (CFD-PBM) to obtain a reliable distribution. In the current work a CFD-PBM coupled model is implemented as FORTRAN subroutines in ANSYS CFX 10 and has been tested for bubbly flow. This model uses the idea of the Multi Phase Multi Size Group approach previously presented by Sha et al. (2006) [18]. The current CFD-PBM coupled method considers an inhomogeneous flow field for the different bubble size groups in Eulerian multi-dispersed phase systems. Considering different velocity fields for the bubbles gives the advantage of a more accurate solution of the hydrodynamics. It is also an improved method for the prediction of bubble size distribution in multiphase flow compared to available commercial packages.
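
As a sketch of what solving a population balance over discrete size groups involves, the toy loop below updates number densities under constant placeholder coalescence and breakage kernels on a geometric size grid. The paper's model uses physically derived kernels, is written as FORTRAN subroutines, and couples each group to its own velocity field inside CFX; this C++ fragment only shows the bookkeeping.

```cpp
#include <vector>
#include <cstdio>

int main() {
    const int G = 4;                                   // bubble size groups
    std::vector<double> N = {1e5, 5e4, 2e4, 1e4};      // number densities [1/m^3]
    const double coal = 1e-10;   // coalescence kernel (placeholder constant)
    const double brk  = 0.05;    // breakage rate      (placeholder constant)
    const double dt   = 0.01;    // time step [s]

    for (int step = 0; step < 100; ++step) {
        std::vector<double> dN(G, 0.0);
        // Coalescence: two group-i bubbles merge into one group-(i+1) bubble
        // (volume is conserved if each group doubles the previous volume).
        for (int i = 0; i + 1 < G; ++i) {
            double rate = coal * N[i] * N[i];
            dN[i]     -= 2.0 * rate;
            dN[i + 1] += rate;
        }
        // Breakage: one group-i bubble splits into two group-(i-1) bubbles.
        for (int i = 1; i < G; ++i) {
            double rate = brk * N[i];
            dN[i]     -= rate;
            dN[i - 1] += 2.0 * rate;
        }
        for (int i = 0; i < G; ++i) N[i] += dt * dN[i];  // explicit Euler
    }
    for (int i = 0; i < G; ++i) std::printf("group %d: N = %.3e\n", i, N[i]);
    return 0;
}
```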

Relevance:

100.00%

Publisher:

Abstract:

The nature of client-server architecture implies that some modules are delivered to customers. These publicly distributed commercial software components are at risk, because users (and simultaneously potential malefactors) have physical access to some components of the distributed system. The problem becomes even worse if interpreted programming languages are used for the creation of client-side modules. The Java language, designed to be compiled into platform-independent byte-code, is no exception and runs an additional risk. Along with advantages like verifying the code before execution (to ensure that the program does not perform illegal operations), Java has some disadvantages: at the byte-code stage a Java program still contains symbol names, line-number tables and other metadata, which can be used for reverse-engineering. This Master's thesis focuses on the protection of Java-based client-server applications. I present a mixture of methods to protect software from tortious acts, then put the theoretical assumptions into practice and examine their effectiveness on examples of Java code. One of the criteria used to evaluate the system is that the product is used in the specialized area of interactive television.
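
As a generic illustration of one idea such a mixture of protection methods might include (not the thesis's Java tooling), the sketch below keeps a string literal XOR-encoded so it does not appear in cleartext to someone inspecting the shipped module; the key, the string and the scheme are all illustrative.

```cpp
#include <string>
#include <cstdio>

// Encoding and decoding are the same operation for a XOR cipher.
std::string xor_crypt(std::string s, unsigned char key) {
    for (char &c : s) c = static_cast<char>(c ^ key);
    return s;
}

int main() {
    const unsigned char key = 0x5A;
    // In a real build the encoded bytes would be generated offline and
    // shipped as opaque data; encoding at run-time here is only for demo.
    std::string hidden = xor_crypt("license-server.example.org", key);
    std::printf("stored bytes: ");
    for (unsigned char c : hidden) std::printf("%02x", c);
    std::printf("\nin use:       %s\n", xor_crypt(hidden, key).c_str());
    return 0;
}
```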

Relevance:

100.00%

Publisher:

Abstract:

The short version of the Oxford-Liverpool Inventory of Feelings and Experiences (sO-LIFE) is a widely used measure for assessing schizotypy. There is limited information, however, on how sO-LIFE scores compare across different countries. The main goal of the present study is to test the measurement invariance of sO-LIFE scores in a large sample of non-clinical adolescents and young adults from four European countries (UK, Switzerland, Italy, and Spain). The scores were obtained from validated versions of the sO-LIFE in the respective languages. The sample comprised 4190 participants (M = 20.87 years; SD = 3.71 years). The study of the internal structure, using confirmatory factor analysis, revealed that both the three-factor (positive schizotypy, cognitive disorganisation, and introvertive anhedonia) and the four-factor (positive schizotypy, cognitive disorganisation, introvertive anhedonia, and impulsive nonconformity) models fitted the data moderately well. Multi-group confirmatory factor analysis showed that the three-factor model had partial strong measurement invariance across countries. Eight items were non-invariant across samples. Statistically significant differences in the mean scores of the sO-LIFE were found by country. Reliability, estimated with ordinal alpha, ranged from 0.75 to 0.87. Within the Item Response Theory framework, the sO-LIFE provides more accurate information at the medium and high end of the latent trait. The current results provide further evidence in support of the psychometric properties of the sO-LIFE, provide new information about the cross-cultural equivalence of schizotypy, and support the use of this measure to screen for psychotic-like features and liability to psychosis in general population samples from different European countries.

Relevance:

100.00%

Publisher:

Abstract:

As the development of integrated circuit technology continues to follow Moore's law, the complexity of circuits increases exponentially. Traditional hardware description languages such as VHDL and Verilog are no longer powerful enough to cope with this level of complexity and do not provide facilities for hardware/software codesign. Languages such as SystemC are intended to solve these problems by combining the expressive power of high-level programming languages with the hardware-oriented facilities of hardware description languages. To fully replace older languages in the design flow of digital systems, SystemC should also be synthesizable. The devices required by modern high-speed networks often share with embedded systems the same tight constraints on, e.g., size, power consumption and price, but also have very demanding real-time and quality-of-service requirements that are difficult to satisfy with general-purpose processors. Dedicated hardware blocks of an application-specific instruction set processor are one way to combine fast processing speed, energy efficiency, flexibility and relatively low time-to-market. Common features can be identified in the network processing domain, making it possible to develop specialized but configurable processor architectures. One such architecture is TACO, which is based on the transport triggered architecture. The architecture offers a high degree of parallelism and modularity and greatly simplified instruction decoding. For this M.Sc.(Tech) thesis, a simulation environment for the TACO architecture was developed with SystemC 2.2, using an old version written with SystemC 1.0 as a starting point. The environment enables rapid design space exploration by providing facilities for hardware/software codesign and simulation, and an extendable library of automatically configured reusable hardware blocks. Other topics covered are the differences between SystemC 1.0 and 2.2 from the viewpoint of hardware modeling, and the compilation of a SystemC model into synthesizable VHDL with the Celoxica Agility SystemC Compiler. As a test case for the environment, a simulation model for a processor for TCP/IP packet validation was designed and tested.
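
For readers unfamiliar with SystemC, a module of the kind such an environment composes looks roughly like the following; the counter, its ports and the small test bench are illustrative and are not taken from the TACO block library.

```cpp
#include <systemc.h>
#include <iostream>

// An 8-bit counter with synchronous reset, modeled as a SystemC module.
SC_MODULE(Counter) {
    sc_in<bool>         clk;
    sc_in<bool>         reset;
    sc_out<sc_uint<8> > count;

    sc_uint<8> value;

    void tick() {                       // runs on every rising clock edge
        if (reset.read()) value = 0;
        else              value = value + 1;
        count.write(value);
    }

    SC_CTOR(Counter) : value(0) {
        SC_METHOD(tick);
        sensitive << clk.pos();
    }
};

int sc_main(int, char*[]) {
    sc_clock                clk("clk", 10, SC_NS);
    sc_signal<bool>         reset;
    sc_signal<sc_uint<8> >  count;

    Counter ctr("ctr");
    ctr.clk(clk);
    ctr.reset(reset);
    ctr.count(count);

    reset = true;                // hold reset for two cycles
    sc_start(20, SC_NS);
    reset = false;
    sc_start(100, SC_NS);        // then count ten rising edges

    std::cout << "count = " << count.read() << std::endl;
    return 0;
}
```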

Relevance:

100.00%

Publisher:

Abstract:

This paper provides a spatial and temporal multi-scale approach to European submarine canyons. We first present the long-term geologic view of European margins as related to controls on submarine canyon development. Then we discuss the extent to which submarine canyon systems resemble river systems, because both essentially form drainage networks. Finally, we deal with the shortest-term, highest-resolution scale to get a flavor of the current functioning and health of modern submarine canyons in the northwestern Mediterranean Sea. Submarine canyons are unique features of the seafloor whose existence was known to European fishermen centuries ago, especially for those canyons that have their heads a short distance from the shoreline. Popular names given to specific canyons in the different languages spoken in European coastal communities refer to the concepts of a "deep" or a "trench." In old times it was also common thinking that submarine canyons were so deep that nobody could measure their depth, or even that they had no bottom. Submarine canyons are just one of the seven different types of seafloor valleys identified by Shepard (1973) in his pioneering morphogenetic classification. Shepard (1973) defined submarine canyons as "steep-walled, sinuous valleys, with V-shaped cross sections, and relief comparable even to the largest of land canyons; tributaries are found in most of the canyons and rock outcrops abound on their walls." Canyons are features typical of continental slopes, with their upper reaches and heads cut into the continental shelf.

Relevance:

100.00%

Publisher:

Abstract:

This thesis is an experimental study of the identification and discrimination of vowels, studied using synthetic stimuli. The acoustic attributes of synthetic stimuli vary, which raises the question of how different spectral attributes are linked to the behaviour of the subjects. The spectral attributes used are formants and spectral moments (centre of gravity, standard deviation, skewness and kurtosis). Two types of experiments are used, related to the identification and discrimination of the stimuli, respectively. Discrimination is studied using both attentive procedures, which require a response from the subject, and preattentive procedures, which require none. Together, the studies offer information about the identification and discrimination of synthetic vowels in 15 different languages. Furthermore, this thesis discusses the role of various spectral attributes in speech perception processes. The thesis is divided into three studies. The first is based only on attentive methods, whereas the other two concentrate on the relationship between identification and discrimination experiments. The neurophysiological methods (EEG recordings) reveal the role of attention in processing and are used in the discrimination experiments, while the results reveal differences in perceptual processes based on language, attention and experimental procedure.
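
For concreteness, the four spectral moments named above can be computed by treating a magnitude spectrum as a probability distribution over frequency, as in the sketch below; the input format and the toy spectrum are assumptions for illustration, not the thesis's actual stimuli or analysis tools.

```cpp
#include <vector>
#include <cmath>
#include <cstdio>

struct Moments { double cog, sd, skew, kurt; };

Moments spectral_moments(const std::vector<double>& freq,
                         const std::vector<double>& mag) {
    double total = 0.0;
    for (double m : mag) total += m;           // normalizer for weights

    double cog = 0.0;                          // centre of gravity (1st moment)
    for (size_t i = 0; i < mag.size(); ++i) cog += mag[i] / total * freq[i];

    double m2 = 0.0, m3 = 0.0, m4 = 0.0;       // central moments 2..4
    for (size_t i = 0; i < mag.size(); ++i) {
        double p = mag[i] / total, d = freq[i] - cog;
        m2 += p * d * d;
        m3 += p * d * d * d;
        m4 += p * d * d * d * d;
    }
    double sd = std::sqrt(m2);
    return {cog, sd, m3 / (sd * sd * sd), m4 / (m2 * m2)};
}

int main() {
    // Toy two-formant-like spectrum: energy around 500 Hz and 1500 Hz.
    std::vector<double> freq = {250, 500, 750, 1250, 1500, 1750};
    std::vector<double> mag  = {0.2, 1.0, 0.3,  0.2,  0.6, 0.1};
    Moments m = spectral_moments(freq, mag);
    std::printf("CoG %.1f Hz, SD %.1f Hz, skew %.2f, kurt %.2f\n",
                m.cog, m.sd, m.skew, m.kurt);
    return 0;
}
```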

Relevance:

100.00%

Publisher:

Abstract:

In this world in which we live, suffering changes as hard as the economic crisis we are going through, which has taken us from times of plenty to watching our day-to-day expenses just to make it to the end of the month, it is time to reinvent oneself. It is for this reason that I present this idea, whose objective is to develop a website that becomes a meeting point for users who want to pass on or broaden their knowledge, offering them the possibility of sharing their skills and abilities with one another. The website will consist of a board of activities where users, once registered, can create the activities they want to learn or teach, asking for something in return if they so wish. Other users who are interested in an activity can then accept the request or make a proposal of their own. From that point on, the users must agree on how to carry out the activity. The website will have a section for users with administrator permissions so that they can manage the portal. This project has been developed with the PHP framework CodeIgniter, which uses layered MVC programming, separating the code into three parts: the Model, the View and the Controller. The HTML5 and CSS3 languages have also been used, together with jQuery, a JavaScript library. MySQL has been used as the database management system.

Relevance:

100.00%

Publisher:

Abstract:

Software integration is the stage in a software development process where separate components are assembled into a single product. It is important to manage the risks involved and to be able to integrate smoothly, because software cannot be released without first integrating it. Furthermore, it has been shown that the integration and testing phase can make up 40% of the overall project costs. These issues can be mitigated by a software engineering practice called continuous integration. This thesis presents how continuous integration was introduced into the author's employer organisation. This includes studying how the continuous integration process works and creating the technical basis for using the process on future projects. The implemented system supports software written in the C and C++ programming languages on the Linux platform, but the general concepts can be applied to any programming language and platform by selecting the appropriate tools. The results demonstrate in detail what issues need to be solved when the practice is adopted in a corporate environment, and they provide an implementation and process description suited to the organisation. The results show that continuous integration can reduce the risks involved in a software process and increase the quality of the product as well.

Relevance:

100.00%

Publisher:

Abstract:

This article deals with the application of basic competences in the Primary Education curriculum. Its objective is to offer some strategies to help teachers integrate basic competences into their programming and assessment methods. To this end, and in order to anticipate possible difficulties in the implementation of basic competences, the first part of the article analyses the current situation through a reading of various legal documents in force. The second part of the article then provides some tools to facilitate this integration from the areas of language and mathematics. We take this approach from the didactics of language and mathematics because of their instrumental character in the acquisition of other knowledge.

Relevance:

100.00%

Publisher:

Abstract:

Immaturity of the gut barrier system in the newborn has been seen to underlie a number of chronic diseases originating in infancy and manifesting later in life. The gut microbiota and breast milk provide the most important maturing signals for the gut-related immune system and reinforcement of the gut mucosal barrier function. Recently, the composition of the gut microbiota has been proposed to be instrumental in the control of host body weight and metabolism, as well as the inflammatory state characterizing overweight and obesity. On this basis, inflammatory Western lifestyle diseases, including overweight development, may represent a potential target for probiotic interventions beyond the well-documented clinical applications. The purpose of the present undertaking was to study the efficacy and safety of perinatal probiotic intervention. The material comprised two ongoing, prospective, double-blind NAMI (Nutrition, Allergy, Mucosal immunology and Intestinal microbiota) probiotic interventions. In the mother-infant nutrition and probiotic study, altogether 256 women were randomized in their first trimester of pregnancy into a dietary intervention and a control group. The intervention group received intensive dietary counselling provided by a nutritionist, and were further randomized at baseline, double-blind, to receive probiotics (Lactobacillus rhamnosus GG and Bifidobacterium lactis) or placebo. The intervention period extended from the first trimester of pregnancy to the end of exclusive breastfeeding. In the allergy prevention study, altogether 159 women were randomized, double-blind, to receive probiotics (Lactobacillus rhamnosus GG) or placebo 4 weeks before expected delivery, the intervention extending for 6 months postnatally. Additionally, patient data on all premature infants with very low birth weight (VLBW) treated in the Department of Paediatrics, Turku University Hospital, during the years 1997–2008 were utilized. The perinatal probiotic intervention reduced the risk of gestational diabetes mellitus (GDM) in the mothers, and perinatal dietary counselling reduced that of fetal overgrowth in GDM-affected pregnancies. Early gut microbiota modulation with probiotics modified the growth pattern of the child by restraining excessive weight gain during the first years of life. The colostrum adiponectin concentration was demonstrated to depend on maternal diet and nutritional status during pregnancy. It was also higher in the colostrum received by children who were of normal weight, compared with those overweight, at the age of 10 years. The early perinatal probiotic intervention and the postnatal probiotic intervention in VLBW infants were shown to be safe. To conclude, the findings of this study provide clinical evidence supporting the involvement of the initial microbial and nutritional environment in the metabolic programming of the child. The manipulation of early gut microbial communities with probiotics might offer an applicable strategy to impact individual energy homeostasis and thus to prevent excessive body-weight gain. The results add weight to the hypothesis that interventions aiming to prevent obesity and its metabolic consequences later in life should be initiated as early as the perinatal period.

Relevance:

100.00%

Publisher:

Abstract:

Information about the transport and dispersion capacity of soluble pollutants in natural streams is important in the management of water resources, especially in planning preventive measures to minimize the problems caused by accidental or intentional releases, both for public health and for the economic activities that depend on the use of the water. Given this importance, this study aimed to develop a warning system for rivers, based on experimental tracer techniques and on the analytical equations of one-dimensional transport of conservative soluble pollutants, to support decision-making in the management of water resources. The system, developed in the Java programming language with a MySQL database, can predict the travel time of pollutant clouds from a point of release and graphically displays the temporal distribution of concentrations as the clouds pass a particular location downstream from the launch point.
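
The abstract does not give the system's exact equations, but the standard one-dimensional analytical solution for an instantaneous release of a conservative pollutant has the form C(x,t) = M / (A·sqrt(4πDt)) · exp(−(x − ut)² / (4Dt)). The sketch below evaluates it with illustrative parameter values; the thesis itself is implemented in Java, so this C++ fragment shows only the underlying formula.

```cpp
#include <cmath>
#include <cstdio>

// Concentration [kg/m^3] at distance x [m] and time t [s] after a spill.
double concentration(double M,   // released mass [kg]
                     double A,   // stream cross-sectional area [m^2]
                     double u,   // mean flow velocity [m/s]
                     double D,   // longitudinal dispersion coefficient [m^2/s]
                     double x, double t) {
    if (t <= 0.0) return 0.0;
    const double pi = 3.14159265358979323846;
    double spread = std::sqrt(4.0 * pi * D * t);
    double arg = -(x - u * t) * (x - u * t) / (4.0 * D * t);
    return M / (A * spread) * std::exp(arg);
}

int main() {
    // Passage of the pollutant cloud at a station 5 km downstream,
    // sampled every 10 minutes (all parameter values are illustrative).
    for (double t = 600.0; t <= 14400.0; t += 600.0)
        std::printf("t = %6.0f s  C = %.6f kg/m^3\n",
                    t, concentration(10.0, 20.0, 0.5, 30.0, 5000.0, t));
    return 0;
}
```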

Relevance:

100.00%

Publisher:

Abstract:

With the shift towards many-core computer architectures, dataflow programming has been proposed as one potential solution for producing software that scales to a varying number of processor cores. Programming for parallel architectures is considered difficult, as the current popular programming languages are inherently sequential and introducing parallelism is typically left to the programmer. Dataflow, however, is inherently parallel, describing an application as a directed graph, where nodes represent calculations and edges represent data dependencies in the form of queues. These queues are the only allowed communication between the nodes, making the dependencies between the nodes explicit and thereby also the parallelism. Once a node has sufficient inputs available, it can, independently of any other node, perform calculations, consume inputs, and produce outputs. Dataflow models have existed for several decades and have become popular for describing signal processing applications, as the graph representation is a very natural one within this field; digital filters are typically described with boxes and arrows in textbooks as well. Dataflow is also becoming more interesting in other domains, and in principle any application working on an information stream fits the dataflow paradigm. Such applications include, among others, network protocols, cryptography, and multimedia applications. As an example, the MPEG group standardized a dataflow language called RVC-CAL to be used within reconfigurable video coding. Describing a video coder as a dataflow network instead of in a conventional programming language makes the coder more readable, as it describes how the video data flows through the different coding tools. While dataflow provides an intuitive representation for many applications, it also introduces some new problems that need to be solved in order for dataflow to be more widely used. The explicit parallelism of a dataflow program is descriptive and enables improved utilization of the available processing units; however, the independence of the nodes also implies that some kind of scheduling is required. The need for efficient scheduling becomes even more evident when the number of nodes is larger than the number of processing units and several nodes are running concurrently on one processor core. There exist several dataflow models of computation, with different trade-offs between expressiveness and analyzability. These vary from rather restricted but statically schedulable models, with minimal scheduling overhead, to dynamic ones where each firing requires a firing rule to be evaluated. The model used in this work, namely RVC-CAL, is a very expressive language, and in the general case it requires dynamic scheduling; however, the strong encapsulation of dataflow nodes enables analysis, and the scheduling overhead can be reduced by using quasi-static, or piecewise static, scheduling techniques. The scheduling problem is concerned with finding the few scheduling decisions that must be made at run-time, while most decisions are pre-calculated. The result is then a set of static schedules, as small as possible, that are dynamically scheduled. To identify these dynamic decisions and to find the concrete schedules, this thesis shows how quasi-static scheduling can be represented as a model checking problem. This involves identifying the relevant information needed to generate a minimal but complete model to be used for model checking. The model must describe everything that may affect scheduling of the application while omitting everything else in order to avoid state space explosion. This kind of simplification is necessary to make the state space analysis feasible. For the model checker to find the actual schedules, a set of scheduling strategies is defined which is able to produce quasi-static schedulers for a wide range of applications. The results of this work show that actor composition with quasi-static scheduling can be used to transform dataflow programs to fit many different computer architectures with different types and numbers of cores. This, in turn, enables dataflow to provide a more platform-independent representation, as one application can be fitted to a specific processor architecture without changing the actual program representation. Instead, the program representation is optimized by the development tools, in the context of design space exploration, to fit the target platform. This work focuses on representing the dataflow scheduling problem as a model checking problem, and is implemented as part of a compiler infrastructure. The thesis also presents experimental results as evidence of the usefulness of the approach.
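
A minimal sketch of the dataflow model described above: nodes communicate only through FIFO queues, and a node may fire once its firing rule (here, enough input tokens) is satisfied. This is plain C++ for illustration, not RVC-CAL or its tooling, and the trivial while-loop at the end stands in for the dynamic scheduler whose overhead the thesis aims to minimize.

```cpp
#include <queue>
#include <cstdio>

using Fifo = std::queue<int>;   // an edge of the dataflow graph

// An actor that adds pairs of tokens; firing rule: one token on each input.
struct Adder {
    Fifo &in0, &in1, &out;
    Adder(Fifo &a, Fifo &b, Fifo &o) : in0(a), in1(b), out(o) {}

    bool canFire() const { return !in0.empty() && !in1.empty(); }

    void fire() {               // consume the inputs, produce one output
        int a = in0.front(); in0.pop();
        int b = in1.front(); in1.pop();
        out.push(a + b);
    }
};

int main() {
    Fifo a, b, sum;
    for (int i = 0; i < 4; ++i) { a.push(i); b.push(10 * i); }

    Adder node(a, b, sum);
    while (node.canFire()) node.fire();   // a trivial dynamic scheduler

    while (!sum.empty()) { std::printf("%d\n", sum.front()); sum.pop(); }
    return 0;
}
```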