972 results for programming model


Relevance: 30.00%

Abstract:

Underwater sound is very important in the field of oceanography, where it is used for remote sensing in much the same way that radar is used in atmospheric studies. One way to mathematically model sound propagation in the ocean is by using the parabolic-equation method, a technique that allows range-dependent environmental parameters. More importantly, this method can model sound transmission where the source emits either a pure tone or a short pulse of sound. Based on the parabolic approximation method and using the split-step Fourier algorithm, a computer model for underwater sound propagation was designed and implemented. This computer model differs from previous models in its use of the interactive mode, structured programming, modular design, and state-of-the-art graphics displays. In addition, the model maximizes the efficiency of computer time through synchronization of loosely coupled dual processors and the design of a restart capability. Since the model is designed for adaptability and for users with limited computer skills, it is anticipated that it will have many applications in the scientific community.
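As an illustration of the method's computational core, here is a minimal sketch of one range step of the split-step Fourier algorithm for the standard parabolic equation; the depth grid, source frequency, reference sound speed, and refraction profile are illustrative assumptions, not parameters from the thesis model.

```python
import numpy as np

def split_step(psi, n, k0, dr, dz):
    """Advance the acoustic field envelope psi(z) by one range step dr."""
    kz = 2.0 * np.pi * np.fft.fftfreq(psi.size, d=dz)   # vertical wavenumbers
    # free-space (diffraction) half of the operator, applied spectrally
    psi = np.fft.ifft(np.fft.fft(psi) * np.exp(-1j * kz**2 * dr / (2.0 * k0)))
    # environment half: phase screen from the index of refraction n = c0/c(z)
    return psi * np.exp(1j * k0 * (n**2 - 1.0) * dr / 2.0)

# usage: march a Gaussian starter field out in range (assumed toy values)
z = np.linspace(0.0, 1000.0, 1024)                      # depth grid [m]
psi = np.exp(-((z - 100.0) / 10.0) ** 2).astype(complex)
n = np.ones_like(z)                                     # homogeneous water column
k0 = 2 * np.pi * 50.0 / 1500.0                          # 50 Hz source, c0 = 1500 m/s
for _ in range(200):
    psi = split_step(psi, n, k0, dr=10.0, dz=z[1] - z[0])
```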

Relevance: 30.00%

Abstract:

The aim of this study was to develop a practical, versatile and fast dosimetry and radiobiological model for calculation of the 3D dose distribution and radiobiological effectiveness of radioactive stents. The algorithm was written in Matlab 6.5 programming language and is based on the dose point kernel convolution. The dosimetry and radiobiological model was applied for evaluation of the 3D dose distribution of 32P, 90Y, 188Re and 177Lu stents. Of the four, 32P delivers the highest dose, while 90Y, 188Re and 177Lu require high levels of activity to deliver a significant therapeutic dose in the range of 15-30 Gy. Results of the radiobiological model demonstrated that the same physical dose delivered by different radioisotopes produces significantly different radiobiological effects. This type of theoretical dose calculation can be useful in the development of new stent designs, the planning of animal studies and clinical trials, and clinical decisions involving individualized treatment plans.
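As a hedged sketch of the underlying dose calculation (the original was written in Matlab 6.5; Python is used here purely for illustration), the 3D dose map is the convolution of the cumulated-activity distribution with the isotope's dose point kernel. The kernel shape, constants, and grid below are illustrative assumptions.

```python
import numpy as np
from scipy.signal import fftconvolve

def radial_kernel(size=21, voxel_mm=1.0, k=1e-11):
    """Toy isotropic kernel: dose per decay falling off as 1/r^2 (arbitrary units)."""
    c = size // 2
    x, y, z = np.mgrid[-c:c + 1, -c:c + 1, -c:c + 1] * voxel_mm
    r2 = x**2 + y**2 + z**2
    r2[c, c, c] = voxel_mm**2            # avoid the singularity at r = 0
    return k / r2

def dose_map(activity, kernel):
    """activity: cumulated decays per voxel; returns absorbed dose per voxel."""
    return fftconvolve(activity, kernel, mode="same")

# usage: a single hot voxel standing in for a stent segment
activity = np.zeros((41, 41, 41))
activity[20, 20, 20] = 1e9
dose = dose_map(activity, radial_kernel())
```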

Relevance: 30.00%

Abstract:

Nucleic acid hairpins have been a subject of study for the last four decades. They are composed of a single strand that is hybridized to itself, with the central section forming an unhybridized loop. In nature, they stabilize single-stranded RNA and serve as nucleation sites for RNA folding, as protein recognition signals, and in mRNA localization and the regulation of mRNA degradation. DNA hairpins in biological contexts, on the other hand, have been studied with respect to forming cruciform structures that can regulate gene expression.

The use of DNA hairpins as fuel for synthetic molecular devices, including locomotion, was proposed and experimentally demonstrated in 2003. They are interesting because they bring an on-demand energy/information supply mechanism to the table. The energy/information is hidden (from hybridization) in the hairpin's loop until required, and is harnessed by opening the stem region and exposing the single-stranded loop section. The loop region is then free for hybridization and helps move the system into a thermodynamically favourable state. The hidden energy and information, coupled with programmability, provide another functionality: selectively choosing which reactions to hide and which to allow to proceed, which helps develop a topological sequence of events.

Hairpins have been utilized as a source of fuel for many different DNA devices. In this thesis, we program four different molecular devices using DNA hairpins and validate them experimentally in the laboratory. 1) The first device: a novel enzyme-free autocatalytic self-replicating system composed entirely of DNA that operates isothermally. 2) The second device: time-responsive circuits using DNA, which have two properties: a) asynchronous, meaning the final output is always correct regardless of differences in the arrival time of different inputs; and b) renewable, meaning the circuits can be used multiple times without major degradation of the gate motifs (so if the inputs change over time, the DNA-based circuit can recompute the output correctly based on the new inputs). 3) The third device: activatable tiles, a theoretical extension to the tile assembly model that enhances its robustness by protecting the sticky sides of tiles until a tile is partially incorporated into a growing assembly. 4) The fourth device: controlled amplification of a DNA catalytic system, a device in which the amplification does not run uncontrollably until the system runs out of fuel, but instead achieves a finite amount of gain.

Nucleic acid circuits with the ability to perform complex logic operations have many potential practical applications, for example the ability to achieve point-of-care diagnostics. We discuss the designs of our DNA hairpin molecular devices, the results we have obtained, and the challenges we have overcome to make them truly functional.

Relevance: 30.00%

Abstract:

People go through their lives making all kinds of decisions, and some of these decisions affect their demand for transportation: for example, their choices of where to live and where to work, how and when to travel, and which route to take. Transport-related choices are typically time dependent and characterized by a large number of alternatives that can be spatially correlated. This thesis deals with models that can be used to analyze and predict discrete choices in large-scale networks. The proposed models and methods are highly relevant for, but not limited to, transport applications. We model decisions as sequences of choices within the dynamic discrete choice framework, also known as parametric Markov decision processes. Such models are known to be difficult to estimate and to apply for prediction because dynamic programming problems need to be solved in order to compute choice probabilities. In this thesis we show that it is possible to exploit the network structure and the flexibility of dynamic programming so that the dynamic discrete choice modeling approach is not only useful for modeling time-dependent choices, but also makes it easier to model large-scale static choices. The thesis consists of seven articles containing a number of models and methods for estimating, applying and testing large-scale discrete choice models. In the following we group the contributions under three themes: route choice modeling, large-scale multivariate extreme value (MEV) model estimation, and nonlinear optimization algorithms. Five articles are related to route choice modeling. We propose different dynamic discrete choice models that allow paths to be correlated, based on the MEV and mixed logit models. The resulting route choice models become expensive to estimate, and we deal with this challenge by proposing innovative methods that reduce the estimation cost. For example, we propose a decomposition method that not only opens up the possibility of mixing, but also speeds up the estimation of simple logit models, which also has implications for traffic simulation. Moreover, we compare the utility maximization and regret minimization decision rules, and we propose a misspecification test for logit-based route choice models. The second theme is related to the estimation of static discrete choice models with large choice sets. We establish that a class of MEV models can be reformulated as dynamic discrete choice models on the networks of correlation structures. These dynamic models can then be estimated quickly using dynamic programming techniques and an efficient nonlinear optimization algorithm. Finally, the third theme focuses on structured quasi-Newton techniques for estimating discrete choice models by maximum likelihood. We examine and adapt switching methods that can easily be integrated into common optimization algorithms (line search and trust region) to accelerate the estimation process. The proposed dynamic discrete choice models and estimation methods can be used in various discrete choice applications. In the area of big data analytics, models that can deal with large choice sets and sequential choices are important. Our research can therefore be of interest in various demand analysis applications (predictive analytics) or can be integrated with optimization models (prescriptive analytics). Furthermore, our studies indicate the potential of dynamic programming techniques in this context, even for static models, which opens up a variety of future research directions.
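To make the link between dynamic programming and choice probabilities concrete, the following is a minimal sketch of the value iteration behind a recursive-logit-style route choice model; the toy network and link utilities are illustrative assumptions, not data from the thesis.

```python
import numpy as np

# Expected downstream utility satisfies the DP fixed point
#   V(k) = log sum_{j in succ(k)} exp(v(k,j) + V(j)),  V(dest) = 0,
# and link choice probabilities then follow a logit formula.
succ = {0: [1, 2], 1: [3], 2: [3], 3: []}            # node 3 is the destination
util = {(0, 1): -1.0, (0, 2): -1.5, (1, 3): -1.0, (2, 3): -0.5}

def solve_values(succ, util, dest, tol=1e-10):
    V = {k: 0.0 for k in succ}
    while True:
        V_new = {k: 0.0 if k == dest else
                 np.logaddexp.reduce([util[k, j] + V[j] for j in succ[k]])
                 for k in succ}
        if max(abs(V_new[k] - V[k]) for k in succ) < tol:
            return V_new
        V = V_new

def link_probs(k, succ, util, V):
    u = np.array([util[k, j] + V[j] for j in succ[k]])
    p = np.exp(u - u.max())                           # numerically stable logit
    return dict(zip(succ[k], p / p.sum()))

V = solve_values(succ, util, dest=3)
print(link_probs(0, succ, util, V))   # probability of link (0,1) vs (0,2)
```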

Relevance: 30.00%

Abstract:

In computer vision, training a model that performs classification effectively depends heavily on the extracted features and on the number of training instances. Conventionally, feature detection and extraction are performed by a domain expert who, in many cases, is expensive to employ and hard to find. Therefore, image descriptors have emerged to automate these tasks. However, designing an image descriptor still requires domain-expert intervention. Moreover, the majority of machine learning algorithms require a large number of training examples to perform well, yet labelled data are not always available or easy to acquire, and dealing with a large dataset can dramatically slow down the training process. In this paper, we propose a novel Genetic Programming based method that automatically synthesises a descriptor using only two training instances per class. The proposed method combines arithmetic operators to evolve a model that takes an image and generates a feature vector. The performance of the proposed method is assessed using six datasets for texture classification with different degrees of rotation, and is compared with seven domain-expert designed descriptors. The results show that the proposed method is robust to rotation and has significantly outperformed, or achieved performance comparable to, the baseline methods.
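A minimal sketch of the representation-and-evaluation side of such a method is given below: arithmetic operators are combined into expression trees that map image windows to features. The primitive set, window statistics, and pooling are illustrative assumptions, and the evolutionary loop (selection, crossover, mutation) is omitted.

```python
import random
import numpy as np

PRIMS = [np.add, np.subtract, np.multiply,
         lambda a, b: a / (b + 1e-8)]            # protected division

def random_tree(depth=3):
    """Build a random arithmetic expression over two window statistics."""
    if depth == 0:
        return random.choice(["mean", "std"])
    return (random.choice(PRIMS), random_tree(depth - 1), random_tree(depth - 1))

def evaluate(tree, window):
    if tree == "mean":
        return window.mean()
    if tree == "std":
        return window.std()
    op, left, right = tree
    return op(evaluate(left, window), evaluate(right, window))

def descriptor(trees, image, size=8):
    """Slide non-overlapping windows; each evolved tree emits one pooled feature."""
    feats = []
    for tree in trees:
        vals = [evaluate(tree, image[i:i + size, j:j + size])
                for i in range(0, image.shape[0] - size + 1, size)
                for j in range(0, image.shape[1] - size + 1, size)]
        feats.append(np.mean(vals))
    return np.array(feats)

rng = np.random.default_rng(0)
trees = [random_tree() for _ in range(5)]
print(descriptor(trees, rng.random((32, 32))))   # a 5-dimensional feature vector
```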

Relevance: 30.00%

Abstract:

Conventional teaching practices often experience difficulties in keeping students motivated and engaged. Video games, however, are very successful at sustaining high levels of motivation and engagement through a set of tasks for hours without apparent loss of focus. In addition, gamers solve complex problems within a gaming environment without feeling fatigue or frustration, as they typically would with a comparable learning task. Based on this notion, the academic community is keen to explore methods that can deliver deep learner engagement and has shown increased interest in adopting gamification, the integration of gaming elements, mechanics, and frameworks into non-game situations and scenarios, as a means to increase student engagement and improve information retention. Its effectiveness when applied to education has been debatable, though, as attempts have generally been restricted to one-dimensional approaches such as transposing a trivial reward system onto existing teaching materials and/or assessments. Nevertheless, a gamified, multi-dimensional, problem-based learning approach can yield improved results even when applied to a very complex and traditionally dry task like the teaching of computer programming, as shown in this paper. The presented quasi-experimental study used a combination of instructor feedback, a real-time sequence of scored quizzes, and live coding to deliver a fully interactive learning experience. More specifically, the “Kahoot!” Classroom Response System (CRS), the classroom version of the TV game show “Who Wants To Be A Millionaire?”, and Codecademy’s interactive platform formed the basis for a learning model which was applied to an entry-level Python programming course. Students were thus allowed to experience multiple interlocking methods similar to those commonly found in a top-quality game experience. To assess gamification’s impact on learning, empirical data from the gamified group were compared to those from a control group that was taught through a traditional learning approach, similar to the one used during previous cohorts. Despite this being a relatively small-scale study, the results and findings for a number of key metrics, including attendance, downloading of course material, and final grades, were encouraging and indicate that the gamified approach was motivating and enriching for both students and instructors.

Relevance: 30.00%

Abstract:

This paper addresses the problem of the evolution of an object-oriented database in the context of orthogonal persistent programming systems. We have observed two characteristics of such systems that offer particular conditions for implementing evolution in a semi-transparent fashion. That transparency can be further enhanced with the obliviousness provided by Aspect-Oriented Programming techniques. A meta-model was conceived, and a prototype developed, to test the feasibility of our approach. The system allows programs written against one schema to access, semi-transparently, data stored under other versions of that schema.

Relevance: 30.00%

Abstract:

This paper introduces the stochastic version of the Geometric Machine Model for the modelling of sequential, alternative, parallel (synchronous) and nondeterministic computations with stochastic numbers stored in a (possibly infinite) shared memory. The programming language L(D!1), induced by the Coherence Space of Processes D!1, can be applied to sequential and parallel products in order to provide recursive definitions for such processes, together with a domain-theoretic semantics of the Stochastic Arithmetic. We analyze both the spatial (ordinal) recursion, related to the spatial modelling of the stochastic memory, and the temporal (structural) recursion, given by the inclusion relation modelling partial objects in the ordered structure of process construction.
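As a loose, informal illustration of stochastic numbers (and not of the paper's domain-theoretic semantics), one can model a stochastic number as a mean together with an uncertainty and propagate both through arithmetic; the Gaussian first-order propagation below is an assumption of this sketch.

```python
import math
from dataclasses import dataclass

@dataclass
class Stochastic:
    mean: float
    std: float

    def __add__(self, other):
        # independent operands: means add, variances add
        return Stochastic(self.mean + other.mean,
                          math.hypot(self.std, other.std))

    def __mul__(self, other):
        # first-order (delta-method) propagation for independent operands
        return Stochastic(self.mean * other.mean,
                          math.hypot(self.mean * other.std,
                                     other.mean * self.std))

a = Stochastic(2.0, 0.1)
b = Stochastic(3.0, 0.2)
print(a + b, a * b)
```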

Relevance: 30.00%

Abstract:

In this paper, the temperature of a pilot-scale batch reaction system is modeled towards the design of a controller based on the explicit model predictive control (EMPC) strategy. Mathematical models are developed from experimental data to describe the system behavior. The simplest yet reliable model obtained is a (1,1,1)-order ARX polynomial model, for which the mentioned EMPC controller has been designed. The resulting controller has reduced mathematical complexity and, according to the successful results obtained in simulations, will be used directly on the real control system in the next stage of the experimental framework.
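A minimal sketch of identifying such a (1,1,1)-order ARX model, y(k) = -a1*y(k-1) + b1*u(k-1) + e(k), by ordinary least squares might look as follows; the synthetic data are illustrative assumptions.

```python
import numpy as np

def fit_arx_111(y, u):
    """Estimate (a1, b1) of y(k) + a1*y(k-1) = b1*u(k-1) from data."""
    phi = np.column_stack([-y[:-1], u[:-1]])     # regressors at step k-1
    theta, *_ = np.linalg.lstsq(phi, y[1:], rcond=None)
    return theta                                  # [a1, b1]

# synthetic plant: y(k+1) = 0.8*y(k) + 0.3*u(k) + noise
rng = np.random.default_rng(1)
u = rng.standard_normal(500)
y = np.zeros(500)
for k in range(499):
    y[k + 1] = 0.8 * y[k] + 0.3 * u[k] + 0.01 * rng.standard_normal()

a1, b1 = fit_arx_111(y, u)    # recovers approximately a1 = -0.8, b1 = 0.3
```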

Relevance: 30.00%

Abstract:

We present an integer programming (IP) based nonparametric (revealed preference) testing procedure for rational consumption behavior in terms of general collective models, which include consumption externalities and public consumption. An empirical application to data drawn from the Russia Longitudinal Monitoring Survey (RLMS) demonstrates the practical usefulness of the procedure. Finally, we present extensions of the testing procedure to evaluate the goodness-of-fit of the collective model subject to testing, and to quantify and improve the power of the corresponding collective rationality tests.
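The collective-model tests themselves require integer programming, but the flavor of a nonparametric revealed preference test can be conveyed with the simpler unitary-model GARP check sketched below; the price/quantity data are illustrative assumptions.

```python
import numpy as np

def garp_violations(p, q):
    """p, q: (T, n) arrays of prices and chosen bundles over T observations."""
    T = p.shape[0]
    cost_own = np.einsum("ti,ti->t", p, q)   # p_t . q_t
    cost_alt = p @ q.T                        # p_t . q_s
    R = cost_own[:, None] >= cost_alt         # t directly revealed preferred to s
    P = cost_own[:, None] > cost_alt          # strict direct revealed preference
    for k in range(T):                        # boolean transitive closure
        R = R | (R[:, [k]] & R[[k], :])
    # GARP: if t is revealed preferred to s, s must not be strictly preferred to t
    return [(t, s) for t in range(T) for s in range(T) if R[t, s] and P[s, t]]

p = np.array([[1.0, 2.0], [2.0, 1.0]])
q = np.array([[3.0, 1.0], [1.0, 3.0]])
print(garp_violations(p, q))   # [] means consistent with utility maximization
```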

Relevance: 30.00%

Abstract:

Applications are subject to a continuous evolution process with a profound impact on their underlying data model, hence requiring frequent updates to the applications' class structure and to the database structure as well. This twofold problem, schema evolution and instance adaptation, usually known as database evolution, is addressed in this thesis. Additionally, we address concurrency and error recovery problems with a novel meta-model and its aspect-oriented implementation. Modern object-oriented databases provide features that help programmers deal with object persistence, as well as with related problems such as database evolution, concurrency and error handling. In most systems there are transparent mechanisms to address these problems; nonetheless, the database evolution problem still requires some human intervention, which consumes much of programmers' and database administrators' work effort. Earlier research has demonstrated that aspect-oriented programming (AOP) techniques enable the development of flexible and pluggable systems. In these earlier works, the schema evolution and instance adaptation problems were addressed as database management concerns. However, none of this research focused on orthogonal persistent systems. We argue that AOP techniques are well suited to address these problems in orthogonal persistent systems. Regarding concurrency and error recovery, earlier research showed that only syntactic obliviousness between the base program and aspects is possible. Our meta-model and framework follow an aspect-oriented approach focused on the object-oriented orthogonal persistent context. The proposed meta-model is characterized by its simplicity, in order to achieve efficient and transparent database evolution mechanisms. Our meta-model supports multiple versions of a class structure by applying a class versioning strategy, thus enabling bidirectional application compatibility among versions of each class structure. That is to say, the database structure can be updated while earlier applications continue to work, as do later applications that know only the updated class structure. The specific characteristics of orthogonal persistent systems, as well as a metadata enrichment strategy within the application's source code, complete the inception of the meta-model and have motivated our research. To test the feasibility of the approach, a prototype was developed. Our prototype is a framework that mediates the interaction between applications and the database, providing them with orthogonal persistence mechanisms. These mechanisms are introduced into applications as an aspect in the aspect-oriented sense. Objects are not required to extend any superclass, implement any interface, or carry a particular annotation. Parametric type classes are also correctly handled by our framework. However, classes that belong to the programming environment must not be handled as versionable, due to restrictions imposed by the Java Virtual Machine. Regarding concurrency support, the framework provides applications with a multithreaded environment which supports database transactions and error recovery. The framework keeps applications oblivious to the database evolution problem, as well as to persistence. Programmers can update the applications' class structure, and the framework will produce a new version of it at the database metadata layer.
Using our XML-based pointcut/advice constructs, the framework's instance adaptation mechanism is extended, keeping the framework oblivious to this problem as well. The potential development gains provided by the prototype were benchmarked. In our case study, the results confirm that the mechanisms' transparency has positive repercussions on programmer productivity, simplifying the entire evolution process at both the application and database levels. The meta-model itself was also benchmarked in terms of complexity and agility. Compared with other meta-models, it requires fewer meta-object modifications in each schema evolution step. Other types of tests were carried out to validate the robustness of the prototype and the meta-model. For these tests we used a small OO7 database, chosen for the complexity of its data model. Since the developed prototype offers some features that were not observed in other known systems, performance benchmarks were not possible; however, the developed benchmark is now available for future performance comparisons with equivalent systems. To test our approach in a real-world scenario, we developed a proof-of-concept application. This application was initially developed without any persistence mechanisms; using our framework and minor changes to the application's source code, we added them. Furthermore, we tested the application in a schema evolution scenario. This real-world experience using our framework showed that applications remain oblivious to persistence and database evolution. In this case study, our framework proved to be a useful tool for programmers and database administrators. Performance issues and the single Java Virtual Machine concurrency model are the major limitations found in the framework.
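As an informal sketch of the instance-adaptation idea (in Python for brevity; the actual prototype is an aspect-oriented Java framework), stored object state can be upgraded on load through a chain of per-class version converters. The Person class and converter below are illustrative assumptions.

```python
CONVERTERS = {}        # (class_name, from_version, to_version) -> callable

def converter(cls_name, v_from, v_to):
    def register(fn):
        CONVERTERS[(cls_name, v_from, v_to)] = fn
        return fn
    return register

@converter("Person", 1, 2)
def person_v1_to_v2(state):
    # hypothetical change: schema v2 split 'name' into 'first_name'/'last_name'
    first, _, last = state.pop("name").partition(" ")
    state.update(first_name=first, last_name=last)
    return state

def adapt(cls_name, state, v_stored, v_wanted):
    """Chain single-step converters on load; the thesis meta-model also
    supports the reverse direction for backward application compatibility."""
    for v in range(v_stored, v_wanted):
        state = CONVERTERS[(cls_name, v, v + 1)](state)
    return state

print(adapt("Person", {"name": "Ada Lovelace"}, 1, 2))
```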

Relevance: 30.00%

Abstract:

Processors with large numbers of cores are becoming commonplace. In order to utilise the available resources in such systems, the programming paradigm has to move towards increased parallelism. However, increased parallelism does not necessarily lead to better performance. Parallel programming models have to provide not only flexible ways of defining parallel tasks, but also efficient methods to manage the created tasks. Moreover, in a general-purpose system, applications residing in the system compete for the shared resources. Thread and task scheduling in such a multiprogrammed multithreaded environment is a significant challenge. In this thesis, we introduce a new task-based parallel reduction model, called the Glasgow Parallel Reduction Machine (GPRM). Our main objective is to provide high performance while maintaining ease of programming. GPRM supports native parallelism; it provides a modular way of expressing parallel tasks and the communication patterns between them. Compiling a GPRM program results in an Intermediate Representation (IR) containing useful information about tasks and their dependencies, as well as the initial mapping information. This compile-time information helps reduce the overhead of runtime task scheduling and is key to high performance. Generally speaking, the granularity and the number of tasks are major factors in achieving high performance. These factors are even more important in the case of GPRM, as it is highly dependent on tasks, rather than threads. We use three basic benchmarks to provide a detailed comparison of GPRM with Intel OpenMP, Cilk Plus, and Threading Building Blocks (TBB) on the Intel Xeon Phi, and with GNU OpenMP on the Tilera TILEPro64. GPRM shows superior performance in almost all cases, simply by controlling the number of tasks. GPRM also provides a low-overhead mechanism, called “Global Sharing”, which improves performance in multiprogramming situations. We use OpenMP, the most popular model for shared-memory parallel programming, as the main GPRM competitor for solving three well-known problems on both platforms: LU factorisation of sparse matrices, image convolution, and linked list processing. We focus on proposing solutions that best fit into the GPRM’s model of execution. GPRM outperforms OpenMP in all cases on the TILEPro64. On the Xeon Phi, our solution for the LU factorisation results in notable performance improvement for sparse matrices with large numbers of small blocks. We investigate the overhead of GPRM’s task creation and distribution for very short computations using the image convolution benchmark. We show that this overhead can be mitigated by combining smaller tasks into larger ones. As a result, GPRM can outperform OpenMP for convolving large 2D matrices on the Xeon Phi. Finally, we demonstrate that our parallel worksharing construct provides an efficient solution for linked list processing and performs better than OpenMP implementations on the Xeon Phi. The results are very promising, as they verify that our parallel programming framework for manycore processors is flexible and scalable, and can provide high performance without sacrificing productivity.
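GPRM code itself is not reproduced here; the following Python toy merely mirrors the idea that task granularity (how many small work items are combined into one task) governs scheduling overhead in a task-based parallel reduction. The chunk sizes and workload are illustrative assumptions.

```python
from concurrent.futures import ThreadPoolExecutor
from functools import reduce
import operator

def parallel_reduce(data, fn, chunk, workers=4):
    """Task-based reduction: 'chunk' sets the task granularity."""
    chunks = [data[i:i + chunk] for i in range(0, len(data), chunk)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        partials = list(pool.map(lambda c: reduce(fn, c), chunks))
    return reduce(fn, partials)

# larger 'chunk' => fewer, coarser tasks => lower scheduling overhead
total = parallel_reduce(list(range(1_000_000)), operator.add, chunk=50_000)
print(total)
```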

Relevance: 30.00%

Abstract:

The placenta, as the sole transport mechanism between mother and fetus, links the maternal physical state and the immediate and life-long outcomes of the offspring. The present study examined the mechanisms behind the effect of maternal obesity on placental lipid accumulation and metabolism. Pregnant Obese Prone (OP) and Obese Resistant (OR) rat strains were fed a control diet throughout gestation. Placentas were collected on gestational d21 for analysis, and frozen placental sections were analyzed for fat accumulation as well as β-Catenin and Dkk1 localization. Additionally, DKK1 was overexpressed in JEG3 trophoblast cells, followed by treatment with NEFA, Oil Red O stain quantification, and mRNA analysis to determine the relationship between placental DKK1 and lipid accumulation. Maternal plasma and placental NEFA and TG were elevated in OP dams, and offspring of OP dams were smaller than those of OR dams. Placental Dkk1 mRNA content was 4-fold lower in OP placentas, and there was a significant increase in β-Catenin accumulation as well as in the mRNA content of fat transport and TG synthesis enzymes, including Ppar-delta, Fatp1, Fat/Cd36, Lipin1, and Lipin3. There was significant lipid accumulation within the decidual zones in OP but not OR placentas, and the thickness of the decidual and junctional zones was significantly smaller in OP than OR placentas. Overexpression of DKK1 in JEG3 cells decreased lipid accumulation and the mRNA content of PPAR-Delta, FATP1, FAT/CD36, LIPIN1, and LIPIN3. Our results indicate that Dkk1 may be regulating placental lipid metabolism through Wnt-mediated mechanisms. Additionally, recent studies have suggested that maternal obesity may also program early development of non-alcoholic fatty liver disease (NAFLD), rates of which have risen in step with the obesity epidemic. In the current study, livers of OP offspring had significantly increased TG content (P<0.05) and lipid accumulation when compared to offspring of OR dams. Additionally, hepatic Dkk1 mRNA content was significantly decreased in OP livers when compared to OR (P<0.05), and treatment of H4IIECR rat hepatocyte cells with NEFA decreased Dkk1 mRNA (P<0.05) in cells that also showed lipid accumulation. Chromatin Immunoprecipitation (ChIP) analysis of the Dkk1 promoter in fetal livers showed a pattern of histone modifications associated with decreased gene transcription in OP offspring, which agrees with our gene expression data. These results demonstrate that the hepatic Dkk1 gene is epigenetically regulated via histone modification in neonatal offspring in the current model of gestational obesity, and future studies will be needed to determine whether these changes contribute to excessive hepatic lipid accumulation in offspring of obese dams.

Relevance: 30.00%

Abstract:

This paper provides a new reading of a classical economic relation: the short-run Phillips curve. Our point is that, when dealing with inflation and unemployment, policy-making can be understood as a multicriteria decision-making problem. Hence, we use so-called multiobjective programming in connection with a computable general equilibrium (CGE) model to determine the combinations of policy instruments that provide efficient combinations of inflation and unemployment. This approach results in an alternative version of the Phillips curve, labelled the efficient Phillips curve. Our aim is to present an application of CGE models to a new area of research that can be especially useful when addressing policy exercises with real data. We apply our methodological proposal to a particular regional economy, Andalusia, in the south of Spain. This tool can offer some guidance for policy advice and policy implementation in the fight against unemployment and inflation.
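As a sketch of the multiobjective mechanics (detached from the actual CGE model), Pareto-efficient inflation–unemployment pairs can be traced by weighted-sum scalarization over a policy instrument; the two toy response functions below are illustrative assumptions standing in for the CGE model of Andalusia.

```python
import numpy as np
from scipy.optimize import minimize

def inflation(x):      # toy: expansionary policy raises inflation
    return 0.02 + 0.5 * x**2

def unemployment(x):   # toy: expansionary policy lowers unemployment
    return 0.15 - 0.1 * x + 0.2 * (x - 0.3)**2

# sweep the weight on each objective to trace the efficient frontier
frontier = []
for w in np.linspace(0.05, 0.95, 10):
    res = minimize(lambda x: w * inflation(x[0]) + (1 - w) * unemployment(x[0]),
                   x0=[0.0])
    frontier.append((inflation(res.x[0]), unemployment(res.x[0])))
print(frontier)        # efficient (inflation, unemployment) combinations
```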
