834 results for OPTIMIZED DYNAMICAL REPRESENTATION


Relevance: 20.00%

Abstract:

This work is devoted to the study of the dynamical and structural properties of dendrimers. Three complementary approaches were used: analytical theory, computer simulations, and experimental NMR studies. A theory of the relaxation spectrum of dendrimer macromolecules was developed. The relaxation processes that manifest themselves in the local orientational mobility of dendrimer macromolecules were identified and studied in detail. The theoretical results and conclusions were then applied to experimental studies of carbosilane dendrimers.

Relevance: 20.00%

Abstract:

This paper is devoted to an analysis of some aspects of Bas van Fraassen's views on representation. While I agree with most of his claims, I disagree on the following three issues. Firstly, I contend that some isomorphism (or at least homomorphism) between the representor and what is represented is a universal necessary condition for the success of any representation, even in the case of misrepresentation. Secondly, I argue that the so-called "semantic" or "model-theoretic" construal of theories does not give proper due to the role played by true propositions in successful representing practices. Thirdly, I attempt to show that van Fraassen's pragmatic - and antirealist - "dissolution" of the "loss of reality objection" loses its bite once we realize that our cognitive contact with real phenomena is achieved not by representing them but by expressing true propositions about them.

Relevance: 20.00%

Abstract:

"Helmiä sioille", pearls before swine, is said in Finnish of something good and fine that is received by a recipient who does not want to, or lacks the ability to, understand, appreciate, or exploit the full potential of the received object, is uninterested in it, or simply dislikes it. For such relatively stable multi-word expressions, which are stored in speakers' memories and which display various kinds of irregular features in their structure, linguistics uses, among others, the terms "idiom" and "phraseological unit". One such irregularity is the fact that the meaning of the expression is not what one would arrive at by treating it as an ordinary regular phrase. Another irregularity, observed by idiom researchers, lies in the limited capacity for variation in form and meaning that many idioms show compared with regular phrases. For this reason one often speaks of the "base form" and "base meaning" of an idiom, and variation is regarded as deviation from these. But when one examines a large number of occurrences of idioms in actual usage, one notices that many of them do allow variation, even to such an extent that the boundaries between a variant and a "base form" are blurred, and instead of a single idiom we suddenly encounter a "family" of several related expressions. All this raises the question of how these expressions should actually be represented in the language. The dissertation critically examines various earlier approaches to describing phraseological units, in order to clarify what difficulties their structure and variation pose for linguistic theory. At the same time, an alternative way of describing these expressions is presented.
The systematic and formal model developed in this dissertation integrates the description of idioms across many different linguistic levels and depicts their variation in the form of a network, as the result of the interplay between the idiom's structure and the contexts in which it occurs, as well as of interaction with other fixed expressions. The model is based on an in-depth, usage-based analysis of the Finnish idiom "X HEITTÄÄ HELMIÄ SIOILLE" (X casts pearls before swine).

Relevance: 20.00%

Abstract:

Mathematical models often contain parameters that need to be calibrated from measured data. The emergence of efficient Markov Chain Monte Carlo (MCMC) methods has made the Bayesian approach a standard tool in quantifying the uncertainty in the parameters. With MCMC, the parameter estimation problem can be solved in a fully statistical manner, and the whole distribution of the parameters can be explored, instead of obtaining point estimates and using, e.g., Gaussian approximations. In this thesis, MCMC methods are applied to parameter estimation problems in chemical reaction engineering, population ecology, and climate modeling. Motivated by the climate model experiments, the methods are developed further to make them more suitable for problems where the model is computationally intensive. After the parameters are estimated, one can start to use the model for various tasks. Two such tasks are studied in this thesis: optimal design of experiments, where the task is to design the next measurements so that the parameter uncertainty is minimized, and model-based optimization, where a model-based quantity, such as the product yield in a chemical reaction model, is optimized. In this thesis, novel ways to perform these tasks are developed, based on the output of MCMC parameter estimation. A separate topic is dynamical state estimation, where the task is to estimate the dynamically changing model state, instead of static parameters. For example, in numerical weather prediction, an estimate of the state of the atmosphere must constantly be updated based on the recently obtained measurements. In this thesis, a novel hybrid state estimation method is developed, which combines elements from deterministic and random sampling methods.
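As a minimal illustration of the MCMC approach described above (a sketch, not the thesis's own implementation: the linear model, noise level, and proposal width are invented for the example), a random-walk Metropolis sampler can explore the whole posterior of the parameters of a model y = a*x + b instead of returning a single point estimate:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data from a hypothetical linear model y = a*x + b + noise
true_a, true_b, sigma = 2.0, 1.0, 0.1
x = np.linspace(0.0, 1.0, 50)
y = true_a * x + true_b + rng.normal(0.0, sigma, x.size)

def log_post(theta):
    """Log-posterior: Gaussian likelihood with a flat prior on (a, b)."""
    a, b = theta
    resid = y - (a * x + b)
    return -0.5 * np.sum(resid ** 2) / sigma ** 2

# Random-walk Metropolis sampling
theta = np.array([0.0, 0.0])
lp = log_post(theta)
chain = []
for _ in range(5000):
    prop = theta + rng.normal(0.0, 0.05, 2)   # symmetric proposal step
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:  # Metropolis accept/reject
        theta, lp = prop, lp_prop
    chain.append(theta)
chain = np.array(chain[1000:])                # discard burn-in

a_mean, b_mean = chain.mean(axis=0)           # posterior means
a_sd, b_sd = chain.std(axis=0)                # posterior spread
```

The chain mean recovers the parameters, and the spread of the chain quantifies their uncertainty without resorting to a Gaussian approximation.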

Relevance: 20.00%

Abstract:

ABSTRACT This study aimed to compare thematic maps of soybean yield for different sampling grids, using geostatistical methods (the semivariance function and kriging). The analysis was performed with soybean yield data (t ha-1) from a commercial area, using regular grids with distances between points of 25x25 m, 50x50 m, 75x75 m and 100x100 m (549, 188, 66 and 44 sampling points, respectively), together with data obtained by yield monitors. Optimized sampling schemes were also generated with the Simulated Annealing algorithm, using maximization of the overall accuracy measure as the optimization criterion. The results showed that sample size and sample density influenced the description of the spatial distribution of soybean yield. When the sample size was increased, the thematic maps described the spatial variability of soybean yield more efficiently (higher values of the accuracy indices and lower values of the sum of squared estimation errors). In addition, more accurate maps were obtained, especially with the optimized sample configurations with 188 and 549 sample points.
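The semivariance function used above can be estimated empirically from gridded samples. A minimal sketch, with synthetic yield values on a hypothetical 25 m grid (the data, trend, and lag bins are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical yield samples (t/ha) on a regular 25 m grid
xx, yy = np.meshgrid(np.arange(0, 500, 25), np.arange(0, 500, 25))
coords = np.column_stack([xx.ravel(), yy.ravel()])
z = 3.0 + 0.002 * coords[:, 0] + rng.normal(0.0, 0.2, len(coords))

def empirical_semivariance(coords, z, lags, tol):
    """Matheron estimator: half the mean squared difference over all
    point pairs whose separation falls within each lag bin."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)
    sq = (z[:, None] - z[None, :]) ** 2
    gammas = []
    for h in lags:
        mask = np.triu(np.abs(d - h) <= tol, k=1)  # count each pair once
        gammas.append(sq[mask].mean() / 2.0)
    return np.array(gammas)

lags = np.array([25.0, 50.0, 100.0, 200.0])
gamma = empirical_semivariance(coords, z, lags, tol=12.5)
```

A model (e.g., spherical) fitted to these empirical points is what kriging then uses to weight neighbors when interpolating the thematic map.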

Relevance: 20.00%

Abstract:

This article discusses three possible ways to derive time domain boundary integral representations for elastodynamics. The discussion points out difficulties that may arise when using these formulations in practical applications, offers recommendations for selecting the most convenient integral representation for elastodynamic problems, and opens the possibility of deriving simplified schemes. The proper way to take into account initial conditions applied to the body, an interesting topic in itself, is also shown. The article illustrates the main differences between the discussed boundary integral representation expressions, their singularities, and possible numerical problems. The correct way to use collocation points outside the analyzed domain is carefully described. Some applications are shown at the end of the paper in order to demonstrate the capabilities of the technique when properly used.

Relevance: 20.00%

Abstract:

With the shift towards many-core computer architectures, dataflow programming has been proposed as one potential solution for producing software that scales to a varying number of processor cores. Programming for parallel architectures is considered difficult, as the current popular programming languages are inherently sequential and introducing parallelism is typically left to the programmer. Dataflow, however, is inherently parallel, describing an application as a directed graph where nodes represent calculations and edges represent data dependencies in the form of queues. These queues are the only allowed communication between the nodes, making the dependencies between the nodes explicit, and thereby also the parallelism. Once a node has sufficient inputs available, it can, independently of any other node, perform calculations, consume inputs, and produce outputs. Dataflow models have existed for several decades and have become popular for describing signal processing applications, as the graph representation is a very natural one in this field: digital filters are typically described with boxes and arrows in textbooks as well. Dataflow is also becoming more interesting in other domains; in principle, any application working on an information stream fits the dataflow paradigm. Such applications include network protocols, cryptography, and multimedia applications. As an example, the MPEG group standardized a dataflow language called RVC-CAL to be used within reconfigurable video coding. Describing a video coder as a dataflow network instead of in a conventional programming language makes the coder more readable, as it describes how the video data flows through the different coding tools. While dataflow provides an intuitive representation for many applications, it also introduces some new problems that need to be solved in order for dataflow to be more widely used.
The explicit parallelism of a dataflow program is descriptive and enables improved utilization of the available processing units; however, the independent nodes also imply that some kind of scheduling is required. The need for efficient scheduling becomes even more evident when the number of nodes is larger than the number of processing units and several nodes run concurrently on one processor core. There exist several dataflow models of computation, with different trade-offs between expressiveness and analyzability. These vary from rather restricted but statically schedulable models, with minimal scheduling overhead, to dynamic models where each firing requires a firing rule to be evaluated. The model used in this work, namely RVC-CAL, is a very expressive language, and in the general case it requires dynamic scheduling; however, the strong encapsulation of dataflow nodes enables analysis, and the scheduling overhead can be reduced by using quasi-static, or piecewise static, scheduling techniques. The scheduling problem is concerned with finding the few scheduling decisions that must be made at run time, while most decisions are pre-calculated. The result is then an, as small as possible, set of static schedules that are dynamically scheduled. To identify these dynamic decisions and to find the concrete schedules, this thesis shows how quasi-static scheduling can be represented as a model checking problem. This involves identifying the relevant information needed to generate a minimal but complete model to be used for model checking. The model must describe everything that may affect scheduling of the application while omitting everything else, in order to avoid state space explosion. This kind of simplification is necessary to make the state space analysis feasible. For the model checker to find the actual schedules, a set of scheduling strategies is defined which is able to produce quasi-static schedulers for a wide range of applications.
The results of this work show that actor composition with quasi-static scheduling can be used to transform dataflow programs to fit many different computer architectures with different types and numbers of cores. This, in turn, enables dataflow to provide a more platform-independent representation, as one application can be fitted to a specific processor architecture without changing the actual program representation. Instead, the program representation is optimized by the development tools, in the context of design space exploration, to fit the target platform. This work focuses on representing the dataflow scheduling problem as a model checking problem and is implemented as part of a compiler infrastructure. The thesis also presents experimental results as evidence of the usefulness of the approach.
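The graph-of-nodes-and-queues model described above can be sketched in a few lines (illustrative only: RVC-CAL programs are written in a dedicated actor language, and the naive scheduler below corresponds to the fully dynamic case, where every firing rule is evaluated at run time):

```python
from collections import deque

class Node:
    """A dataflow actor: fires when every input queue holds a token."""
    def __init__(self, func, inputs, output):
        self.func, self.inputs, self.output = func, inputs, output

    def can_fire(self):
        return all(q for q in self.inputs)   # non-empty queues only

    def fire(self):
        tokens = [q.popleft() for q in self.inputs]  # consume one token per edge
        self.output.append(self.func(*tokens))       # produce one result token

# Graph: source edge -> double -> add-one -> sink edge (edges are FIFO queues)
a, b, c = deque(), deque(), deque()
nodes = [Node(lambda v: 2 * v, [a], b),
         Node(lambda v: v + 1, [b], c)]

a.extend([1, 2, 3])                           # feed tokens into the source edge
while any(n.can_fire() for n in nodes):       # naive dynamic scheduler
    for n in nodes:
        if n.can_fire():
            n.fire()

print(list(c))  # [3, 5, 7]
```

Because each node touches only its own queues, the two nodes could equally well run on separate cores; a quasi-static scheduler would replace the `while` loop with precomputed firing sequences, leaving only the genuinely data-dependent decisions to run time.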

Relevance: 20.00%

Abstract:

It has been shown for several DNA probes that the recently introduced Fast-FISH (fluorescence in situ hybridization) technique is well suited for quantitative microscopy. For highly repetitive DNA probes, the hybridization (renaturation) time and the number of subsequent washing steps were reduced considerably by omitting denaturing chemical agents (e.g., formamide). An appropriate hybridization temperature and time allow a clear discrimination between major and minor binding sites by quantitative fluorescence microscopy. The well-defined physical conditions for hybridization permit automation of the procedure, e.g., by a programmable thermal cycler. Here, we present optimized conditions for a commercially available X-specific α-satellite probe. Highly fluorescent major binding sites were obtained for a hybridization temperature of 74°C and a hybridization time of 60 min. They were clearly discriminated from some low-fluorescence minor binding sites on metaphase chromosomes as well as in interphase cell nuclei. On average, a total of 3.43 ± 1.59 binding sites were measured in metaphase spreads, and 2.69 ± 1.00 in interphase nuclei. Microwave activation for denaturation and hybridization was tested to accelerate the procedure. The slides with the target material and the hybridization buffer were placed in a standard microwave oven. After denaturation for 20 s at 900 W, hybridization was performed for 4 min at 90 W. The suitability of a microwave oven for Fast-FISH was confirmed by application to a chromosome 1-specific α-satellite probe. In this case, denaturation was performed at 630 W for 60 s and hybridization at 90 W for 5 min. In all cases, the results were analyzed quantitatively and compared to the results obtained by Fast-FISH. The major binding sites were clearly discriminated by their brightness.

Relevance: 20.00%

Abstract:

This doctoral dissertation presents studies of the formation and evolution of galaxies through observations and simulations of galactic halos. The halo is the component of a galaxy which hosts some of the oldest objects we know of in the cosmos; it is where clues to the history of galaxies are found, for example in how the chemical structure is related to the dynamics of objects in the halo. The dynamical and chemical structure of halos, both in the Milky Way's own halo and in two elliptical galaxies, is the underlying theme of the research. I focus on the density falloff and chemistry of the two external halos, and on the dynamics, density falloff, and chemistry of the Milky Way halo. I first study galactic halos via computer simulations, to test the long-term stability of an anomalous feature recently found in the kinematics of the Milky Way's metal-poor stellar halo. I find that the feature is transient, making its origin unclear. I use a second set of simulations to test whether an initially strong relation between the dynamics and chemistry of halo globular clusters in a Milky Way-type galaxy is affected by a merging satellite galaxy, and find that the relation remains strong despite a merger in which the satellite is a third of the mass of the host galaxy. From simulations, I move to observing halos in nearby galaxies, a challenging procedure as most of the light from galaxies comes from the disk and bulge components as opposed to the halo. I use Hubble Space Telescope observations of the halo of the galaxy M87 and, comparing to similar observations of NGC 5128, find that the chemical structure of the inner halo is similar for both of these giant elliptical galaxies. I use Very Large Telescope observations of the outer halo of NGC 5128 (Centaurus A) and, because of the difficulty in resolving dim extragalactic stellar halo populations, I introduce a new technique to subtract the contaminating background galaxies.
A transition from a metal-rich stellar halo to a metal-poor one has previously been discovered in two different types of galaxies, the disk galaxy M31 and the classic elliptical NGC 3379. Unexpectedly, I discover in a third type of galaxy, the merger remnant NGC 5128, that the density of metal-rich and metal-poor halo stars falls at the same rate within galactocentric radii of 8-65 kpc, the limit of our observations. This thesis presents new results which open opportunities for future investigations.

Relevance: 20.00%

Abstract:

The goal of this thesis is to estimate the effect of the form of knowledge representation on the efficiency of knowledge sharing. The objectives include the design of an experimental framework that allows this effect to be established, data collection, and statistical analysis of the collected data. The study follows an experimental quantitative design. The experimental questionnaire features three sample forms of knowledge: text, mind maps, and concept maps. In each interview, these forms are presented to an interviewee, after which knowledge sharing time and knowledge sharing quality are measured. According to the statistical analysis of 76 interviews, text performs worse in both knowledge sharing time and quality than the visualized forms of knowledge representation. However, mind maps and concept maps do not differ in knowledge sharing time and quality, as the difference between them is not statistically significant. Since visualized, structured forms of knowledge perform better than unstructured text in knowledge sharing, companies are advised to foster the use of these forms in their internal knowledge sharing processes. Aside from their performance in knowledge sharing, the visualized, structured forms are preferable due to the possibility of using them in a system of ontological knowledge management within an enterprise.
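A sketch of the kind of significance test such a comparison might rest on (the sample sizes, means, and spreads below are invented for illustration, not the thesis's data): Welch's t statistic for hypothetical knowledge sharing times under two forms of representation.

```python
import random
from statistics import mean, variance

random.seed(2)

# Hypothetical knowledge-sharing times in minutes (invented data)
text_times = [random.gauss(12.0, 2.0) for _ in range(26)]     # plain text
mindmap_times = [random.gauss(8.0, 2.0) for _ in range(25)]   # mind maps

def welch_t(x, y):
    """Welch's t statistic: mean difference scaled by its standard error,
    allowing unequal variances in the two samples."""
    vx, vy = variance(x) / len(x), variance(y) / len(y)
    return (mean(x) - mean(y)) / (vx + vy) ** 0.5

t = welch_t(text_times, mindmap_times)
# |t| well above ~2 indicates a difference significant at roughly the 5% level
```

The text-vs-visualized comparison would then reject equality of means, while a mind-map-vs-concept-map comparison with a small |t| would not, matching the pattern of results reported above.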
