997 results for Multiple Programming
Abstract:
Virtual Laboratories are an indispensable space for developing practical activities in a Virtual Environment. In the field of Computer and Software Engineering, different types of practical activities have to be performed in order to obtain basic competences which are impossible to achieve by other means. This paper specifies an ontology for a general virtual laboratory. The proposed ontology provides a mechanism to select the best resources needed in a Virtual Laboratory once a specific practical activity has been defined and the main competences that students have to achieve in the learning process have been fixed. Furthermore, the proposed ontology can be used to develop an automatic wizard tool that creates a Moodle classroom from the practical activity specification and the related competences.
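The selection mechanism described above can be sketched in a few lines. This is only an illustration under our own assumptions: the resource names and competence labels below are invented, and the real ontology is far richer than a dictionary of competence sets.

```python
# Minimal sketch (hypothetical names): selecting virtual-laboratory
# resources from the competences a practical activity targets.

# A toy "ontology": each resource is annotated with the competences
# it helps develop.
RESOURCES = {
    "remote_compiler": {"programming", "debugging"},
    "network_simulator": {"networking"},
    "uml_editor": {"modelling", "design"},
    "shared_whiteboard": {"teamwork", "design"},
}

def select_resources(target_competences):
    """Return every resource covering at least one target competence."""
    return sorted(
        name for name, covers in RESOURCES.items()
        if covers & set(target_competences)
    )

print(select_resources({"programming", "design"}))
```

A wizard tool in the spirit of the paper would take such a selection and emit the corresponding Moodle classroom configuration.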
Abstract:
Correct species identification is a crucial issue in systematics, with key implications for prioritising conservation effort. However, it can be particularly challenging in recently diverged species due to their strong similarity and relatedness. In such cases, species identification requires multiple and integrative approaches. In this study we used multiple criteria, namely plumage colouration, biometric measurements, geometric morphometrics, stable isotope analysis (SIA) and genetics (mtDNA), to identify the species of 107 bycatch birds from two closely related seabird species, the Balearic (Puffinus mauretanicus) and Yelkouan (P. yelkouan) shearwaters. Biometric measurements, stable isotopes and genetic data produced two stable clusters of bycatch birds matching the two study species, as indicated by reference birds of known origin. Geometric morphometrics was excluded as a species identification criterion because its two clusters were not stable. The combination of plumage colouration, linear biometrics, stable isotope and genetic criteria was crucial to infer the species of 103 of the bycatch specimens. In the present study, SIA in particular emerged as a powerful criterion for species identification, but the temporal stability of the isotopic values is critical for this purpose. Indeed, we found some variability in stable isotope values over the years within each species, but species differences explained most of the variance in the isotopic data. This result highlights the importance of examining sources of variability in the isotopic data on a case-by-case basis prior to the cross-application of the SIA approach to other species. Our findings illustrate how the integration of several methodological approaches can help to correctly identify individuals from recently diverged species, as each criterion measures different biological phenomena and species divergence is not expressed simultaneously in all biological traits.
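The clustering step at the heart of this approach can be illustrated with a toy 2-means on two made-up features. All values below are synthetic and purely illustrative; the study itself used several criteria and proper statistical tooling, not this sketch.

```python
# Illustrative sketch (synthetic data, hypothetical values): grouping
# bycatch birds into two clusters from a biometric and an isotopic
# feature, in the spirit of the multi-criteria approach described above.

def dist(a, b):
    # Squared Euclidean distance between two 2-D points.
    return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2

def mean(group):
    n = len(group)
    return (sum(p[0] for p in group) / n, sum(p[1] for p in group) / n)

def two_means(points, iters=20):
    """Cluster 2-D points into two groups with plain k-means (k=2)."""
    c0, c1 = points[0], points[1]                 # naive initialisation
    for _ in range(iters):
        g0 = [p for p in points if dist(p, c0) <= dist(p, c1)]
        g1 = [p for p in points if dist(p, c0) > dist(p, c1)]
        c0, c1 = mean(g0), mean(g1)
    return g0, g1

# (wing length mm, d15N) -- entirely made-up values
birds = [(235, 11.0), (238, 11.4), (233, 10.8),   # "Balearic-like"
         (221, 14.1), (224, 13.8), (219, 14.4)]   # "Yelkouan-like"
g0, g1 = two_means(birds)
print(len(g0), len(g1))
```

A criterion like geometric morphometrics would be dropped, as in the study, when repeated runs on different samples fail to reproduce the same two groups.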
Abstract:
Because Spain has experienced several mass-casualty accidents in different parts of the country, it is considered advisable for every city to have a disaster-management plan with a set of designated spaces for responding to an accident of this kind. This article aims to present an arrangement of such spaces applicable to any provincial capital in Spain, taking the city of Girona as an example. Girona has been taken as the model because its road, rail, port and airport infrastructures are increasingly busy.
Abstract:
Today a typical embedded system (e.g. a mobile phone) requires high performance to carry out tasks such as real-time encoding/decoding; it must consume little energy so it can run for hours or days on lightweight batteries; it must be flexible enough to integrate multiple applications and standards in a single device; and it must be designed and verified in a short time despite the increase in complexity. Designers struggle against these adversities, which call for new innovations in architectures and design methodologies. Coarse-grained reconfigurable architectures (CGRAs) are emerging as potential candidates to overcome all these difficulties, and different architectures of this type have been presented in recent years. Their coarse granularity greatly reduces delay, area, power consumption and configuration time compared with FPGAs. On the other hand, compared with traditional coarse-grained programmable processors, their abundant computational resources allow them to achieve a high level of parallelism and efficiency. Nevertheless, existing CGRAs are not being widely applied, mainly because of the great difficulty of programming such complex architectures. ADRES is a new CGRA designed by the Interuniversity Micro-Electronics Center (IMEC). It combines a very long instruction word (VLIW) processor and a coarse-grained array, providing two different options in the same physical device. Among its advantages are high performance, little communication overhead and ease of programming. Finally, ADRES is a template rather than a concrete architecture: with the help of the DRESC compiler (Dynamically Reconfigurable Embedded System Compiler), it is possible to find better or application-specific architectures. This work presents the implementation of an MPEG-4 encoder for ADRES, showing the evolution of the code towards a good implementation for a given architecture.
The main features of ADRES and its compiler (DRESC) are also presented. The goals are to reduce as much as possible the number of cycles (time) needed to implement the MPEG-4 encoder and to examine the various difficulties of working in the ADRES environment. The results show that the cycle count is reduced by 67% when comparing the initial and final code in VLIW mode, and by 84% when comparing the initial code in VLIW mode with the final code in CGA mode.
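The reported speed-ups can be checked with a short calculation. The absolute cycle counts below are assumed for illustration; only the ratios follow the abstract.

```python
# Worked check of the reported cycle reductions (the absolute cycle
# counts are hypothetical; only the percentages come from the text).

def reduction(before, after):
    """Percentage of cycles removed going from `before` to `after`."""
    return 100 * (before - after) / before

initial_vliw = 1_000_000          # assumed baseline cycle count
final_vliw   = 330_000            # ~67% fewer cycles in VLIW mode
final_cga    = 160_000            # ~84% fewer cycles in CGA mode

print(round(reduction(initial_vliw, final_vliw)))  # 67
print(round(reduction(initial_vliw, final_cga)))   # 84
```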
Abstract:
The aim of this project is to get used to another kind of programming. Until now, I have used very complex programming languages to develop applications or even to program microcontrollers, but the PicoCricket system is evidence that we do not need such complex development tools to build functional devices. The PicoCricket system is a clear example of simple programming that makes devices work the way we programmed them. There is an easy but effective way to program small devices by simply stating what we want them to do. We cannot write complex algorithms and mathematical operations, but we can program these devices in a short time. Nowadays, the easier and faster we produce, the more we earn. The tendency is therefore to develop fast, cheap and easy, and the PicoCricket system delivers exactly that.
Abstract:
The underlying cause of many human autoimmune diseases is unknown, but several environmental factors are implicated in triggering the self-destructive immune reactions. Multiple Sclerosis (MS) is a chronic autoimmune disease of the central nervous system, potentially leading to persistent neurological deterioration. The cause of MS is not known, and apart from immunomodulatory treatments there is no cure. In the early phase of the disease, relapsing-remitting MS (RR-MS) is characterized by unpredictable exacerbations of the neurological symptoms, called relapses, which can occur at intervals ranging from 4 weeks to several years. Microbial infections are known to trigger MS relapses, and patients are instructed to avoid factors that might increase the risk of infection, to use antibiotics properly, and to take care of dental hygiene. Among the environmental factors known to increase susceptibility to infections, high levels of inhalable particulate matter in ambient air affect all people within a geographical region. During the period of interest in this thesis, the occurrence of MS relapses could be effectively reduced by injections of interferon, which has immunomodulatory and antiviral properties. In this thesis, ecological and epidemiological analyses were used to study the possible connection between MS relapse occurrence, population-level viral infections and air-quality factors, as well as the effects of interferon medication. Hospital archive data were collected retrospectively from 1986-2001, a period ranging from when interferon medication first became available until just before other disease-modifying MS therapies arrived on the market. The grouped data were studied with logistic regression and intervention analysis, and individual patient data with survival analysis.
Interferons proved effective in the treatment of MS in this observational study: the number of MS exacerbations was lower during interferon use than in the time before interferon treatment. A statistically significant temporal relationship between MS relapses and inhalable particulate matter (PM10) concentrations was found, which implies that MS patients are affected by exposure to PM10. Interferon probably protected against the effect of PM10, because a significant increase in the risk of exacerbations was observed only in MS patients without interferon medication following environmental exposure to population-level specific viral infections and PM10. Apart from being antiviral, interferon could thus also attenuate the enhancement of immune reactions caused by ambient-air PM10. The retrospective approach utilizing carefully constructed hospital records proved to be an economical and reliable source of MS disease information for statistical analyses.
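The grouped-data logistic regression mentioned above can be sketched on synthetic data. Everything here is illustrative: the exposure values and outcomes are simulated, not the hospital data, and a real analysis would use established statistical software rather than this hand-rolled fit.

```python
# Illustrative sketch only (synthetic data): fitting a logistic model of
# relapse occurrence against an exposure covariate, in the spirit of the
# logistic-regression analyses described above.
import math
import random

def fit_logistic(xs, ys, lr=0.05, steps=1000):
    """Fit P(y=1) = 1 / (1 + exp(-(a + b*x))) by stochastic gradient ascent."""
    a = b = 0.0
    for _ in range(steps):
        for x, y in zip(xs, ys):
            p = 1 / (1 + math.exp(-(a + b * x)))
            a += lr * (y - p)
            b += lr * (y - p) * x
    return a, b

random.seed(0)
xs = [random.uniform(0, 3) for _ in range(200)]      # standardised "exposure"
ys = [1 if random.random() < 1 / (1 + math.exp(-(x - 1.5))) else 0 for x in xs]
a, b = fit_logistic(xs, ys)
print(b > 0)   # positive slope: higher exposure, higher relapse odds
```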
Abstract:
Most Finnish periodical magazines have a website, often an online service. The objective of this thesis is to understand the magazines' resources and capabilities and to match them with the goals and objectives of their online strategies. The theoretical part of the thesis focuses on explaining and classifying resources, capabilities, goals and objectives, and on applying all of these to the Finnish magazine publishing context. The empirical part is a comparative case study of four magazines. The findings indicate that, with cooperation, advertising and community-hosting capabilities, magazines can utilize their human, brand, content and customer-base resources. These resources can further be directed towards the goals of profitability, customer-centricity and brand congruency.
Abstract:
Agile software development has grown in popularity since the Agile Manifesto was declared in 2001. However, there is a strong belief that agile methods are not suitable for embedded, critical or real-time software development, even though multiple studies and cases show otherwise. This thesis presents a custom agile process that can be used in embedded software development. The reasons for the presumed unfitness of agile methods in embedded software development have mainly been based on a perception that these methods provide no real control, no strict discipline and less rigorous engineering practices. One starting point is therefore to provide a light process with a disciplined approach to embedded software development. Agile software development has gained popularity because there are still big issues in software development as a whole: projects fail due to schedule slips, budget overruns or failure to meet business needs. This does not change when talking about embedded software development. These issues remain valid, with multiple new ones arising from the complex and demanding domain in which embedded software developers work. These issues are another starting point for this thesis. The thesis is based heavily on Feature Driven Development (FDD), a software development methodology that can be seen as a runner-up to the most popular agile methodologies. FDD as such is quite process-oriented and lacks a few practices commonly considered extremely important in agile development methodologies. For FDD to gain acceptance in the software development community, it needs to be modified and enhanced. This thesis presents an improved custom agile process that can be used in embedded software development projects with sizes varying from 10 to 500 persons. The process is based on Feature Driven Development, complemented where suitable by parts of Extreme Programming, Scrum and Agile Modeling.
Finally, the thesis presents how the new process responds to the common issues in embedded software development. The process of creating the new process is evaluated in a retrospective, and guidelines for such process-creation work are introduced. These emphasize agility also in process development, through early and frequent deliveries and the teamwork needed to create a suitable process.
Abstract:
Western societies have been faced with the fact that overweight, impaired glucose regulation and elevated blood pressure are already prevalent in pediatric populations. This will inevitably mean an increase in later manifestations of cardio-metabolic diseases. The dilemma has been suggested to stem from fetal life, and it is surmised that the early nutritional environment plays an important role in the process called programming. The aim of the present study was to characterize early nutritional determinants associated with cardio-metabolic risk factors in fetuses, infants and children. Further, the study was designed to establish whether dietary counseling initiated in early pregnancy can modify this cascade. Healthy mother-child pairs (n=256) participating in a dietary intervention study were followed from early pregnancy to childhood. The intervention included detailed dietary counseling by a nutritionist targeting saturated fat intake in excess of recommendations and fiber consumption below recommendations. Cardio-metabolic programming was studied by characterizing the offspring's cardio-metabolic risk factors, such as over-activation of the autonomic nervous system, elevated blood pressure and an adverse metabolic status (e.g. a high serum split proinsulin concentration). Fetal cardiac sympathovagal activation was measured during labor. Postnatally, children's blood pressure was measured at six-month and four-year follow-up visits. Further, infants' metabolic status was assessed by means of growth and serum biomarkers (32-33 split proinsulin, leptin and adiponectin) at the age of six months. This study showed that fetal cardiac sympathovagal activity was positively associated with maternal pre-pregnancy body mass index, indicating adverse cardio-metabolic programming in the offspring.
Further, a reduced risk of high split proinsulin in infancy and lower blood pressure in childhood were found in those offspring whose mothers' weight gain and amount and type of dietary fat during pregnancy were as recommended. Of note, maternal dietary counseling from early pregnancy onwards could ameliorate the offspring's metabolic status by reducing the risk of a high split proinsulin concentration, although it had no effect on the other cardio-metabolic markers in the offspring. In the postnatal period, breastfeeding proved to entail benefits in cardio-metabolic programming. Finally, the recommended dietary protein and total fat content in the child's diet were important nutritional determinants reducing blood pressure at the age of four years. The intrauterine and immediate postnatal periods comprise a window of opportunity for interventions aiming to reduce the risk of cardio-metabolic disorders, and they bring the prospect of achieving health benefits within one generation.
Abstract:
The development of correct programs is a core problem in computer science. Although formal verification methods for establishing correctness with mathematical rigor are available, programmers often find these difficult to put into practice. One hurdle is deriving the loop invariants and proving that the code maintains them. So-called correct-by-construction methods aim to alleviate this issue by integrating verification into the programming workflow. Invariant-based programming is a practical correct-by-construction method in which the programmer first establishes the invariant structure and then incrementally extends the program, in steps of adding code and proving after each addition that the code is consistent with the invariants. In this way, the program is kept internally consistent throughout its development, and the construction of the correctness arguments (proofs) becomes an integral part of the programming workflow. A characteristic of the approach is that programs are described as invariant diagrams, a graphical notation similar to the state charts familiar to programmers. Invariant-based programming is a new method that has not yet been evaluated in large-scale studies. The most important prerequisite for feasibility on a larger scale is a high degree of automation. The goal of the Socos project has been to build tools to assist the construction and verification of programs using the method. This thesis describes the implementation and evaluation of a prototype tool in the context of the Socos project. The tool supports the drawing of the diagrams, automatic derivation and discharging of verification conditions, and interactive proofs. It is used to develop programs that are correct by construction. The tool consists of a diagrammatic environment connected to a verification condition generator and an existing state-of-the-art theorem prover.
Its core is a semantics for translating diagrams into verification conditions, which are sent to the underlying theorem prover. We describe a concrete method for 1) deriving sufficient conditions for the total correctness of an invariant diagram; 2) sending the conditions to the theorem prover for simplification; and 3) reporting the results of the simplification to the programmer in a way that is consistent with the invariant-based programming workflow and that allows errors in the program specification to be detected efficiently. The tool uses an efficient automatic proof strategy to prove as many conditions as possible automatically and lets the remaining conditions be proved interactively. The tool is based on the verification system PVS and uses the SMT (Satisfiability Modulo Theories) solver Yices as a catch-all decision procedure. Conditions that are not discharged automatically may be proved interactively using the PVS proof assistant. The programming workflow is very similar to the process by which a mathematical theory is developed inside a computer-supported theorem prover environment such as PVS. The programmer reduces a large verification problem, with the aid of the tool, into a set of smaller problems (lemmas), and can substantially improve the degree of proof automation by developing specialized background theories and proof strategies to support the specification and verification of a specific class of programs. We demonstrate this workflow by describing in detail the construction of a verified sorting algorithm. Tool-supported verification often has little to no presence in computer science (CS) curricula. Furthermore, program verification is frequently introduced as an advanced and purely theoretical topic that is not connected to the workflow taught in the early and practically oriented programming courses.
Our hypothesis is that verification could be introduced early in the CS education, and that verification tools could be used in the classroom to support the teaching of formal methods. A prototype of Socos has been used in a course at Åbo Akademi University targeted at first and second year undergraduate students. We evaluate the use of Socos in the course as part of a case study carried out in 2007.
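The invariant-first workflow can be illustrated with an ordinary sorting routine whose loop invariant is written out and checked at run time. This is our own sketch, not the Socos diagram notation or its proof machinery: run-time assertions only test the invariant on concrete inputs, whereas the tool discharges it for all inputs.

```python
# A small illustration (not the Socos notation): a sorting routine whose
# loop invariant is stated explicitly and checked with assertions,
# echoing the "add code, then show it preserves the invariants" workflow.

def invariant(a, i):
    """a[:i] is sorted, and every element of a[:i] <= every element of a[i:]."""
    prefix_sorted = all(a[k] <= a[k + 1] for k in range(i - 1))
    partitioned = all(x <= y for x in a[:i] for y in a[i:])
    return prefix_sorted and partitioned

def selection_sort(a):
    a = list(a)
    for i in range(len(a)):
        assert invariant(a, i)                 # invariant holds on entry
        m = min(range(i, len(a)), key=a.__getitem__)
        a[i], a[m] = a[m], a[i]                # extend the sorted prefix
        assert invariant(a, i + 1)             # ...and it still holds
    return a

print(selection_sort([3, 1, 4, 1, 5]))  # [1, 1, 3, 4, 5]
```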
Abstract:
Programming and mathematics are core areas of computer science (CS) and consequently also important parts of CS education. Introductory instruction in these two topics is, however, not without problems. Studies show that CS students find programming difficult to learn and that teaching mathematical topics to CS novices is challenging. One reason for the latter is the disconnection between mathematics and programming found in many CS curricula, which results in students not seeing the relevance of the subject for their studies. In addition, reports indicate that students' mathematical capability and maturity levels are dropping. The challenges faced when teaching mathematics and programming at CS departments can also be traced back to gaps in students' prior education. In Finland the high school curriculum does not include CS as a subject; instead, focus is on learning to use the computer and its applications as tools. Similarly, many of the mathematics courses emphasize application of formulas, while logic, formalisms and proofs, which are important in CS, are avoided. Consequently, high school graduates are not well prepared for studies in CS. Motivated by these challenges, the goal of the present work is to describe new approaches to teaching mathematics and programming aimed at addressing these issues: Structured derivations is a logic-based approach to teaching mathematics, where formalisms and justifications are made explicit. The aim is to help students become better at communicating their reasoning using mathematical language and logical notation at the same time as they become more confident with formalisms. The Python programming language was originally designed with education in mind, and has a simple syntax compared to many other popular languages. 
The aim of using it in instruction is to address algorithms and their implementation in a way that allows the focus to be put on learning algorithmic thinking and programming instead of on learning a complex syntax. Invariant-based programming is a diagrammatic approach to developing programs that are correct by construction. The approach is based on elementary propositional and predicate logic, and makes explicit the underlying mathematical foundations of programming. The aim is also to show how mathematics in general, and logic in particular, can be used to create better programs.
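A beginner-level example in the spirit of these approaches: Python's simple syntax lets the loop invariant be stated right next to the code, connecting elementary logic to a working program. The example is ours, not taken from the thesis.

```python
# Binary exponentiation with its loop invariant written out, showing how
# a short Python function can carry its own logical justification.

def power(x, n):
    """Compute x**n for integer n >= 0 by repeated squaring."""
    result, base, k = 1, x, n
    while k > 0:
        # Invariant: result * base**k == x**n
        if k % 2 == 1:
            result *= base
        base *= base
        k //= 2
    return result

print(power(3, 5))  # 243
```

At loop exit k == 0, so the invariant gives result == x**n, which is exactly the kind of reasoning structured derivations make explicit.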
Abstract:
The thesis deals with entrepreneurial intentions and the individual's perceptions of entrepreneurship. If we want to promote entrepreneurship, it is not enough to understand the benefits society can derive from entrepreneurs (jobs, more tax revenue, and so on); we must understand why entrepreneurship is interesting and attractive from the individual's point of view. That very question has been central to cognitive entrepreneurship research over the past 10 years, and thanks to that research our understanding of entrepreneurs has increased considerably. The problem with existing research is that perceived feasibility and perceived desirability, i.e. the attitudes said to lead to entrepreneurial intentions, describe only the individual's general attitude towards entrepreneurship. According to earlier research findings, there is a difference between general attitudes towards an action and attitudes towards performing that very action oneself. If we want to know whether an individual might start and run a business, we must therefore study the individual's attitude towards performing that specific action. According to the thesis's findings, we can learn more about attitudes towards entrepreneurship by also studying motivation and goals. In this way we can understand why some choose to become entrepreneurs while others choose not to, even though outwardly they have the same opportunities to do so.
Abstract:
Background. Multiple myeloma (MM) is the second most common hematologic malignancy after lymphomas in Finland; the annual incidence of MM is approximately 200. For three decades the median survival remained at 3 to 4 years from diagnosis, until high-dose melphalan treatment supported by autologous stem cell transplantation (ASCT) became the standard of care for newly diagnosed MM in the mid-1990s and the median survival increased to 5-6 years. This study focuses on three important aspects of ASCT, namely 1) stem cell mobilization, 2) single vs. double ASCT as initial treatment, and 3) the role of minimal residual disease (MRD) for long-term outcome. Aim. The aim of this series of studies was to evaluate the outcomes of MM patients and the ASCT procedure at the Turku University Central Hospital, Finland. First, we tried to identify which factors predict unsuccessful mobilization of autologous stem cells. Second, we compared the use of short-acting granulocyte colony-stimulating factor (G-CSF) with long-acting G-CSF as a mobilization agent. Third, one and two successive ASCTs were compared in 100 patients with MM. Fourth, for patients in complete response (CR) after stem cell transplantation (SCT), patient-specific probes for quantitative allele-specific oligonucleotide polymerase chain reaction (qASO-PCR) measurements were designed to evaluate MRD and its importance for long-term outcome. Results. The quantity of previous chemotherapy and previous interferon use were significant pre-mobilization factors predicting mobilization failure, together with some factors related to the mobilization therapy itself, such as the duration and degree of cytopenias and the occurrence of sepsis. Short-acting and long-acting G-CSF combined with chemotherapy were comparable as stem cell mobilizers.
Progression-free survival (PFS) and overall survival (OS) tended to be longer after double ASCT than after single ASCT with a median follow-up time of 4 years, but this difference disappeared as the follow-up time increased. qASO-PCR was a good and sensitive divider of the CR patients into two prognostic groups: an MRD low/negative (≤ 0.01%) and an MRD high (> 0.01%) group, with a significant difference in PFS and suggestively also in OS. Conclusions. When the factors predicting a poor outcome of stem cell mobilization prevail, it is possible to identify those patients who need specific efforts to maximize mobilization efficacy. Long-acting pegfilgrastim is a practical and effective alternative to short-acting filgrastim for mobilization therapy. There is no need to perform double ASCT on all eligible patients. MRD assessment with qASO-PCR is a sensitive method for evaluating the depth of the CR response and can be used to predict long-term outcome after ASCT.
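The 0.01% cut-off used to form the two prognostic groups can be expressed directly in code. The patient identifiers and MRD levels below are made up for illustration; only the threshold comes from the study.

```python
# Toy illustration (made-up values): splitting patients in complete
# response into the two prognostic MRD groups described above, using
# the study's 0.01% qASO-PCR cut-off.

MRD_CUTOFF = 0.01   # percent, per the study's threshold

def mrd_group(level_percent):
    """Return 'low/negative' if MRD <= 0.01%, else 'high'."""
    return "low/negative" if level_percent <= MRD_CUTOFF else "high"

samples = {"patient_A": 0.002, "patient_B": 0.01, "patient_C": 0.35}
print({p: mrd_group(v) for p, v in samples.items()})
```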