919 results for Many-to-many-assignment problem
Abstract:
Software product line (SPL) engineering offers several advantages in the development of families of software products, such as reduced costs, high quality and a short time to market. A software product line is a set of software-intensive systems, each of which shares a common core set of functionalities, but also differs from the other products through customization tailored to fit the needs of individual groups of customers. The differences between products within the family are well understood and organized into a feature model that represents the variability of the SPL. Products can then be built by generating and composing features described in the feature model. Testing of software product lines has become a bottleneck in the SPL development lifecycle, since many of the techniques used in their testing have been borrowed from traditional software testing and do not directly take advantage of the similarities between products. This limits the overall gains that can be achieved in SPL engineering. Recent work proposed by both industry and the research community for improving SPL testing has begun to consider this problem, but there is still a need for better testing techniques that are tailored to SPL development. In this thesis, I make two primary contributions to software product line testing. First, I propose a new definition of testability for SPLs that is based on the ability to re-use test cases between products without a loss of fault detection effectiveness. I build on this idea to identify elements of the feature model that contribute positively and/or negatively towards SPL testability. Second, I provide a graph-based testing approach called the FIG Basis Path method, which selects products and features for testing based on a feature dependency graph. This method should increase our ability to re-use results of test cases across successive products in the family and reduce testing effort. I report the results of a case study involving several non-trivial SPLs and show that for these objects, the FIG Basis Path method is as effective as testing all products, but requires us to test no more than 24% of the products in the SPL.
Abstract:
The Carr-Purcell-Meiboom-Gill (CPMG) pulse sequence has been used in many applications of magnetic resonance imaging (MRI) and low-resolution NMR (LRNMR) spectroscopy. Recently, CPMG was used in online LRNMR measurements that use long RF pulse trains, causing an increase in probe temperature and, therefore, tuning and matching maladjustments. To minimize this problem, the use of a low-power CPMG sequence based on low refocusing pulse flip angles (LRFA) was studied experimentally and theoretically. This approach has been used in several MRI protocols to reduce the incident RF power and meet specific absorption rate limits. The results for CPMG with LRFA of 3π/4 (CPMG(135)), π/2 (CPMG(90)) and π/4 (CPMG(45)) were compared with conventional CPMG with refocusing π pulses. For a homogeneous field, with linewidth Δν = 15 Hz, the refocusing flip angles can be as low as π/4 to obtain the transverse relaxation time (T2) value with errors below 5%. For a less homogeneous magnetic field, Δν = 100 Hz, the choice of the LRFA has to take into account the reduction in the intensity of the CPMG signal and the increase in the time constant of the CPMG decay, which also becomes dependent on the longitudinal relaxation time (T1). We have compared the T2 values measured by conventional CPMG and CPMG(90) for 30 oilseed species, and a good correlation coefficient, r = 0.98, was obtained. Therefore, for oilseeds, the T2 measurements performed with π/2 refocusing pulses (CPMG(90)), with the same pulse width as conventional CPMG, use only 25% of the RF power. This reduces the heating problem in the probe and the power deposition in the samples.
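The 25% figure quoted above follows from the quadratic dependence of RF power on the pulse amplitude at fixed pulse width; a short sketch of the standard scaling argument (our notation, not taken verbatim from the paper):

\[
\theta = \gamma B_1 t_p \ \Rightarrow\ B_1 \propto \theta \ \text{at fixed } t_p,
\qquad
P \propto B_1^{2} \ \Rightarrow\
\frac{P_{90}}{P_{180}} = \left(\frac{\pi/2}{\pi}\right)^{2} = \frac{1}{4} = 25\%.
\]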
Abstract:
Cold shock proteins (CSPs) are nucleic acid binding chaperones, first described as being induced to solve the problem of mRNA stabilization after a temperature downshift. Caulobacter crescentus has four CSPs: CspA and CspB, which are cold induced, and CspC and CspD, which are induced only in stationary phase. In this work we have determined that the synthesis of both CspA and CspB reaches its maximum levels early in the acclimation phase. The deletion of cspA causes a decrease in growth at low temperature, whereas the strain with a deletion of cspB has a very subtle and transient cold-related growth phenotype. The cspA cspB double mutant has a slightly more severe phenotype than that of the cspA mutant, suggesting that although CspA may be more important for cold adaptation than CspB, both proteins have a role in this process. Gene expression analyses were carried out using cspA and cspB regulatory fusions to the lacZ reporter gene and showed that both genes are regulated at the transcriptional and posttranscriptional levels. Deletion mapping of the long 5'-untranslated region (5'-UTR) of each gene identified a common region important for cold induction, probably via translation enhancement. In contrast to what was reported for other bacteria, these cold shock genes have no regulatory regions downstream of the ATG that are important for cold induction. This work shows that the importance of CspA and CspB for C. crescentus cold adaptation, their mechanisms of regulation, and their pattern of expression during the acclimation phase apparently differ in many aspects from what has been described so far for other bacteria.
Abstract:
The seminal work of Horn and Schunck [8] is the first variational method for optical flow estimation. It introduced a novel framework where the optical flow is computed as the solution of a minimization problem. From the assumption that pixel intensities do not change over time, the optical flow constraint equation is derived. This equation relates the optical flow to the derivatives of the image. There are infinitely many vector fields that satisfy the optical flow constraint, so the problem is ill-posed. To overcome this, Horn and Schunck introduced an additional regularity condition that restricts the possible solutions. Their method minimizes both the optical flow constraint and the magnitude of the variations of the flow field, producing smooth vector fields. One of the limitations of this method is that, typically, it can only estimate small motions. In the presence of large displacements, the method fails when the gradient of the image is not smooth enough. In this work, we describe an implementation of the original Horn and Schunck method and also introduce a multi-scale strategy in order to deal with larger displacements. For this multi-scale strategy, we create a pyramidal structure of downsampled images and replace the optical flow constraint equation with a nonlinear formulation. In order to tackle this nonlinear formula, we linearize it and solve the method iteratively at each scale. In this sense, there are two common approaches: one that computes the motion increment in the iterations, and the one we follow, which computes the full flow during the iterations. The solutions are incrementally refined over the scales. This pyramidal structure is a standard tool in many optical flow methods.
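For reference, the optical flow constraint and the Horn and Schunck energy referred to in this abstract can be written in their standard form (α is the smoothness weight; I_x, I_y, I_t are the image derivatives):

\[
I_x u + I_y v + I_t = 0,
\qquad
E(u,v) = \int_{\Omega} \bigl(I_x u + I_y v + I_t\bigr)^{2}
        + \alpha^{2} \bigl(|\nabla u|^{2} + |\nabla v|^{2}\bigr)\, dx\, dy .
\]

Minimizing E yields the smooth flow fields mentioned above; the multi-scale variant replaces the linearized constraint with the nonlinear brightness-constancy term I(x+u, y+v, t+1) - I(x, y, t) at each pyramid level and linearizes it iteratively.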
Abstract:
The inherent stochastic character of most of the physical quantities involved in engineering models has led to an ever-increasing interest in probabilistic analysis. Many approaches to stochastic analysis have been proposed. However, it is widely acknowledged that the only universal method available to solve accurately any kind of stochastic mechanics problem is Monte Carlo simulation. One of the key parts in the implementation of this technique is the accurate and efficient generation of samples of the random processes and fields involved in the problem at hand. In the present thesis an original method for the simulation of homogeneous, multi-dimensional, multi-variate, non-Gaussian random fields is proposed. The algorithm has proved to be very accurate in matching both the target spectrum and the marginal probability. The computational efficiency and robustness are very good too, even when dealing with strongly non-Gaussian distributions. What is more, the resulting samples possess all the relevant, well-defined and desired properties of “translation fields”, including crossing rates and distributions of extremes. The topic of the second part of the thesis lies in the field of non-destructive parametric structural identification. Its objective is to evaluate the mechanical characteristics of constituent bars in existing truss structures, using static loads and strain measurements. In the cases of missing data and of damage that affects only a small portion of a bar, Genetic Algorithms have proved to be an effective tool to solve the problem.
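For concreteness, the “translation field” construction mentioned above (in the sense commonly used in the random-field literature; the notation here is ours, not the thesis's) maps a homogeneous standard Gaussian field G through the Gaussian CDF Φ and the inverse of the target marginal CDF F:

\[
X(\mathbf{x}) = F^{-1}\!\bigl(\Phi\bigl(G(\mathbf{x})\bigr)\bigr),
\]

so that X has the prescribed non-Gaussian marginal by construction, while its correlation structure is inherited, in distorted form, from that of G; properties such as crossing rates and extreme-value distributions then follow from those of the underlying Gaussian field.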
Abstract:
University Master's Degree in Intelligent Systems and Numerical Applications in Engineering (SIANI)
Abstract:
The hierarchical organisation of biological systems plays a crucial role in the pattern formation of gene expression resulting from morphogenetic processes, where the autonomous internal dynamics of cells, as well as cell-to-cell interactions through membranes, are responsible for the emergent peculiar structures of the individual phenotype. Being able to reproduce the system's dynamics at different levels of such a hierarchy might be very useful for studying such a complex phenomenon of self-organisation. The idea is to model the phenomenon in terms of a large and dynamic network of compartments, where the interplay between inter-compartment and intra-compartment events determines the emergent behaviour resulting in the formation of spatial patterns. According to these premises, the thesis proposes a review of the different approaches already developed for modelling developmental biology problems, as well as of the main models and infrastructures available in the literature for modelling biological systems, analysing their capabilities in tackling multi-compartment / multi-level models. The thesis then introduces a practical framework, MS-BioNET, for modelling and simulating these scenarios exploiting the potential of multi-level dynamics. This is based on (i) a computational model featuring networks of compartments and an enhanced model of chemical reactions addressing molecule transfer, (ii) a logic-oriented language to flexibly specify complex simulation scenarios, and (iii) a simulation engine based on the many-species/many-channels optimised version of Gillespie's direct method. The thesis finally proposes the adoption of the agent-based model as an approach capable of capturing multi-level dynamics. To overcome the problem of parameter tuning in the model, the simulators are supplied with a module for parameter optimisation. The task is defined as an optimisation problem over the parameter space in which the objective function to be minimised is the distance between the output of the simulator and a target one. The problem is tackled with a metaheuristic algorithm. As an example of application of the MS-BioNET framework and of the agent-based model, a model of the first stages of Drosophila melanogaster development is realised. The goal of the model is to generate the early spatial pattern of gap gene expression. The correctness of the models is shown by comparing the simulation results with real gene expression data with spatial and temporal resolution, acquired from free online sources.
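For orientation, a minimal, unoptimised sketch of Gillespie's direct method for a single well-mixed compartment is given below in plain Python. The species names and rate constants are illustrative only, and this sketch does not reproduce the many-species/many-channels optimisation used by the MS-BioNET engine.

import math
import random

def gillespie_direct(x, reactions, t_end, rng=random.Random(0)):
    """Minimal Gillespie direct method (SSA) for one well-mixed compartment.

    x         -- dict species name -> copy number (modified in place)
    reactions -- list of (propensity_fn(state) -> float, stoichiometry dict)
    t_end     -- end time of the simulation
    Returns the trajectory as a list of (time, state snapshot) pairs.
    """
    t, trajectory = 0.0, [(0.0, dict(x))]
    while True:
        propensities = [a(x) for a, _ in reactions]
        a0 = sum(propensities)
        if a0 <= 0.0:                      # no reaction can fire any more
            break
        t += rng.expovariate(a0)           # exponential waiting time to next event
        if t > t_end:
            break
        r, acc = rng.random() * a0, 0.0    # choose a reaction proportionally to its propensity
        for (_, stoich), a in zip(reactions, propensities):
            acc += a
            if r < acc:
                for species, change in stoich.items():
                    x[species] += change
                break
        trajectory.append((t, dict(x)))
    return trajectory

# Toy example: constitutive mRNA production and first-order degradation.
state = {"mRNA": 0}
rxns = [
    (lambda s: 2.0,             {"mRNA": +1}),   # production, rate 2.0
    (lambda s: 0.1 * s["mRNA"], {"mRNA": -1}),   # degradation, 0.1 per molecule
]
for time, snapshot in gillespie_direct(state, rxns, t_end=50.0)[-3:]:
    print(f"t = {time:6.2f}   mRNA = {snapshot['mRNA']}")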
Abstract:
Synthetic biology has recently undergone great development: many papers have been published and many applications have been presented, spanning from the production of biopharmaceuticals to the synthesis of bioenergetic substrates or industrial catalysts. Despite these advances, however, most of the applications are quite simple and do not fully exploit the potential of this discipline. This limitation in complexity has many causes, such as the incomplete characterization of some components or the intrinsic variability of biological systems, but one of the most important reasons is the inability of the cell to sustain the additional metabolic burden introduced by a complex circuit. The objective of the project of which this work is part is to address this problem through the engineering of a multicellular behaviour in prokaryotic cells. This system will introduce a cooperative behaviour that makes it possible to implement complex functionalities that cannot be obtained with a single cell. In particular, the goal is to implement Leader Election, a procedure first devised in the field of distributed computing to identify a single process as the organizer and coordinator of a series of tasks assigned to the whole population. The election of the leader greatly simplifies the computation by providing centralized control. Furthermore, this system may even be useful for evolutionary studies that aim to explain how complex organisms evolved from unicellular systems. The work presented here describes, in particular, the design and the experimental characterization of a component of the circuit that solves the Leader Election problem. This module, composed of a hybrid promoter and a gene, is activated in the non-leader cells after receiving the signal that a leader is present in the colony. The most important element in this case is the hybrid promoter: it has been realized in different versions, applying the heuristic rules stated in [22], and their activity has been experimentally tested. The objective of the experimental characterization was to test the response of the genetic circuit to the introduction, in the cellular environment, of particular molecules, the inducers, which can be considered inputs of the system. The desired behaviour is similar to that of a logic AND gate, in which the output, represented by the luminous signal produced by a fluorescent protein, is one only in the presence of both inducers. The robustness and the stability of this behaviour have been tested by changing the concentration of the input signals and building dose-response curves. From these data it is possible to conclude that the analysed constructs have an AND-like behaviour over a wide range of inducer concentrations, even if many differences can be identified in the expression profiles of the different constructs. This variability reflects the fact that the input and output signals are continuous, so their binary representation is not able to capture the complexity of the behaviour. The module of the circuit considered in this analysis has a fundamental role in the realization of the intercellular communication system that is necessary for the cooperative behaviour to take place. For this reason, the second phase of the characterization focused on the analysis of signal transmission. In particular, the interaction between this element and the one responsible for emitting the chemical signal has been tested.
The desired behaviour is still similar to a logic AND, since, even in this case, the output signal is determined by the activity of the hybrid promoter. The experimental results have demonstrated that the systems behave correctly, even if there is still substantial variability between them. The dose-response curves highlighted that stricter constraints on the inducer concentrations need to be imposed in order to obtain a clear separation between the two levels of expression. In the concluding chapter the DNA sequences of the hybrid promoters are analysed, trying to identify the regulatory elements that are most important for the determination of gene expression. Given the available data, it was not possible to draw definitive conclusions. Finally, a few considerations on promoter engineering and the realization of complex circuits are presented. This section aims to briefly recall some of the problems outlined in the introduction and to provide a few possible solutions.
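As a purely illustrative complement to the dose-response characterization described above, a common way to sketch an AND-like two-input promoter is as a product of Hill terms. All parameter names and values below are invented for illustration and are not fitted to the data of this work.

def and_gate_response(ind1, ind2, k1=1.0, k2=1.0, n1=2.0, n2=2.0,
                      basal=0.02, vmax=1.0):
    """Toy dose-response of a two-input AND-like hybrid promoter.

    Each inducer contributes a Hill-type activation term; the output
    (e.g. normalised fluorescence) is high only when both inducers are high.
    Parameters are illustrative, not estimates from the experiments.
    """
    h1 = ind1 ** n1 / (k1 ** n1 + ind1 ** n1)
    h2 = ind2 ** n2 / (k2 ** n2 + ind2 ** n2)
    return basal + (vmax - basal) * h1 * h2

# Coarse "truth table" sampled at low/high inducer concentrations:
for a in (0.01, 10.0):
    for b in (0.01, 10.0):
        print(f"ind1={a:5.2f}  ind2={b:5.2f}  ->  output={and_gate_response(a, b):.3f}")

The continuous output of such a model also makes the point raised in the abstract: a binary (0/1) reading only approximates the measured expression profiles.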
Abstract:
The purpose of this doctoral thesis is to prove existence for a mutually catalytic random walk with infinite branching rate on countably many sites. The process is defined as a weak limit of an approximating family of processes. An approximating process is constructed by adding jumps to a deterministic migration on an equidistant time grid. As the law of the jumps we need to choose the invariant probability measure of the mutually catalytic random walk with a finite branching rate in the recurrent regime. This model was introduced by Dawson and Perkins (1998), and this thesis relies heavily on their work. Due to the properties of this invariant distribution, which is in fact the exit distribution of planar Brownian motion from the first quadrant, it is possible to establish a martingale problem for the weak limit of any convergent sequence of approximating processes. We can prove a duality relation for the solution to the mentioned martingale problem, which goes back to Mytnik (1996) in the case of finite rate branching, and this duality gives rise to weak uniqueness for the solution to the martingale problem. Using standard arguments we can show that this solution is in fact a Feller process and has the strong Markov property. For the case of only one site we prove that the model we have constructed is the limit of finite rate mutually catalytic branching processes as the branching rate approaches infinity. Therefore, it seems natural to refer to the above model as an infinite rate branching process. However, a result for convergence on infinitely many sites remains open.
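For orientation, the finite-rate mutually catalytic branching model of Dawson and Perkins, whose infinite-branching-rate limit is the object constructed here, can be written schematically as the following system of SDEs (notation is ours; γ > 0 is the branching rate, A the generator of the migrating random walk on the site space S):

\[
\begin{aligned}
dX_t(k) &= (A X_t)(k)\,dt + \sqrt{\gamma\, X_t(k)\, Y_t(k)}\; dB^{1}_t(k),\\
dY_t(k) &= (A Y_t)(k)\,dt + \sqrt{\gamma\, X_t(k)\, Y_t(k)}\; dB^{2}_t(k),
\qquad k \in S,
\end{aligned}
\]

with independent standard Brownian motions B^1(k), B^2(k): each of the two populations branches at a rate proportional to the local mass of the other, which is what makes the catalysis mutual.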
Abstract:
Startups' contributions to economic growth have been widely recognized. However, the funding gap is often a problem limiting startups' development. To some extent, VC can be a means to solve this problem: VC is one of the optimal financial intermediaries for startups. This dissertation focuses on two streams of VC studies: the criteria used by venture capitalists to evaluate startups and the effect of VC on innovation. First, although many criteria have been analyzed, the empirical assessment of the effect of startup reputation on VC funding has not been investigated. Yet reputation is usually positively related with firm performance, which may affect VC funding. By analyzing reputation along the generalized visibility dimension and the generalized favorability dimension, using a sample of 200 startups founded from 1995 onwards and operating in the UK MNT sector, we show that both dimensions of reputation have a positive influence on the likelihood of receiving VC funding. We also find that management team heterogeneity positively influences the likelihood of receiving VC funding. Second, studies investigating the effect of venture capital on innovation have frequently resorted to patent data. However, innovation is a process leading from invention to successful commercialization, and while patents capture the upstream side of innovative performance, they poorly describe its downstream one. By reflecting the introduction of new products or services, trademarks can complete the picture, but empirical studies on trademarking in startups are rare. Analyzing a sample of 192 startups founded from 1996 onwards and operating in the UK MNT sector, we find that VC funding has a positive effect on the propensity to register trademarks, as well as on the number and breadth of trademarks.
Abstract:
In many application domains data can be naturally represented as graphs. When the application of analytical solutions for a given problem is unfeasible, machine learning techniques can be a viable way to solve it. Classical machine learning techniques are defined for data represented in vectorial form. Recently some of them have been extended to deal directly with structured data. Among those techniques, kernel methods have shown promising results both from the computational complexity and the predictive performance point of view. Kernel methods make it possible to avoid an explicit mapping to a vectorial form by relying on kernel functions, which informally are functions calculating a similarity measure between two entities. However, the definition of good kernels for graphs is a challenging problem because of the difficulty of finding a good tradeoff between computational complexity and expressiveness. Another problem we face is learning on data streams, where a potentially unbounded sequence of data is generated by some source. There are three main contributions in this thesis. The first contribution is the definition of a new family of kernels for graphs based on Directed Acyclic Graphs (DAGs). We analyzed two kernels from this family, achieving state-of-the-art results, from both the computational and the classification point of view, on real-world datasets. The second contribution consists in making the application of learning algorithms to streams of graphs feasible. Moreover, we defined a principled way of managing memory. The third contribution is the application of machine learning techniques for structured data to non-coding RNA function prediction. In this setting, the secondary structure is thought to carry relevant information. However, existing methods considering the secondary structure have prohibitively high computational complexity. We propose to apply kernel methods to this domain, obtaining state-of-the-art results.
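As a minimal illustration of the kernel-method interface described in this abstract (deliberately not the DAG-based kernels developed in the thesis), the sketch below defines a toy graph kernel that only compares node-label histograms and feeds the resulting Gram matrix to an SVM; scikit-learn and NumPy are assumed to be available.

from collections import Counter

import numpy as np
from sklearn.svm import SVC

def label_histogram_kernel(g1, g2):
    """Toy graph kernel: dot product of the node-label histograms.

    A graph is represented simply as a dict node -> label; edges are ignored
    to keep the example tiny, but the kernel is still an inner product of
    count vectors and hence positive semidefinite.
    """
    h1, h2 = Counter(g1.values()), Counter(g2.values())
    return float(sum(h1[label] * h2[label] for label in h1))

# Tiny synthetic dataset: graphs as node -> label dicts, with binary classes.
graphs = [
    {0: "A", 1: "A", 2: "B"},
    {0: "A", 1: "B", 2: "B", 3: "B"},
    {0: "A", 1: "A", 2: "A"},
    {0: "B", 1: "B"},
]
classes = [0, 1, 0, 1]

# Precompute the Gram matrix and train an SVM on it ("precomputed" kernel).
gram = np.array([[label_histogram_kernel(g, h) for h in graphs] for g in graphs])
clf = SVC(kernel="precomputed").fit(gram, classes)

# To classify a new graph, evaluate the kernel against the training graphs.
new_graph = {0: "A", 1: "A", 2: "B", 3: "A"}
k_new = np.array([[label_histogram_kernel(new_graph, g) for g in graphs]])
print(clf.predict(k_new))

The point of the sketch is only the interface: any positive semidefinite similarity between structured objects can be plugged into a standard learner without ever building explicit feature vectors.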
Abstract:
This research investigates how the "Ustica case" unfolded in Italian public opinion in the years between 1980 and 1992. The expression "Ustica case" refers to the political problem arising from the events surrounding the downing of the Itavia DC-9 civil aircraft on 27 June 1980, in circumstances that, as is well known, were clarified only many years after the fact. The analysis aims to capture the specific features of the process that led the Ustica affair to acquire political relevance within the Italian public sphere, in particular by considering the role played by public opinion in a decade, that of the Italian 1980s and early 1990s, characterized by a new centrality of the media with respect to the political sphere. Through the analysis of a broad selection of printed sources (about 1,500 articles from the main Italian newspapers and about 700 articles from the press organs of the Italian political parties), the media and political dynamics that led to the thematization of an affair that had remained entirely absent from the national political agenda until 1986 are brought to light. The analysis of judicial sources also made it possible to show how the politicization of the Ustica case, built around the tension between opacity and transparency of political power and the effective yet trivializing paradigm of the "State massacres" ("stragi di Stato"), proved functional to reaching, after 1990, the first elements of truth about the tragedy and to widening the case to an international dimension.
Abstract:
Since 1990 the Institut für Kernphysik of the University of Mainz has operated a worldwide unique accelerator facility for nuclear and particle physics experiments, the Mainz Microtron (MAMI-B). This accelerator cascade consists of three racetrack microtrons (RTMs) with radio-frequency linear accelerators at 2.45 GHz, with which a quasi-continuous electron beam of up to 100 μA can be accelerated to 855 MeV.

In 1999 the implementation of the final expansion stage, a Harmonic Double-Sided Microtron (HDSM, MAMI-C) with a final energy of 1.5 GeV, was begun. Its design required some bold steps, e.g. bending magnets with a field gradient and the resulting beam-optical properties, which have a large influence on the longitudinal dynamics of the accelerator. This made it necessary to introduce the "harmonic" mode of operation, with two different frequencies for the two linear accelerators.

Many machine parameters (such as RF amplitudes or phases) act directly on the acceleration process, yet their physical values are not always easy to access by measurement. For an RTM, with its comparatively simple and well-defined beam dynamics, this is unproblematic in routine operation; for the HDSM, however, knowledge of the physical quantities is considerably more important, if only because of the larger number of parameters. Within this work, suitable beam-diagnostic methods were developed with which these machine parameters can be checked and compared with the design specifications.

Since fitting the machine model to a single phase measurement does not always yield unambiguous results, owing to the unavoidable measurement errors, a form of tomography is used: the longitudinal phase space is examined by means of an acceptance measurement. An extended model can then be fitted to the resulting wealth of data, which gives the model parameters a greater significance.

The results of these investigations show that the accelerator as an overall system behaves essentially as predicted and that a large number of different configurations are possible for beam operation; in routine operation, however, this is avoided and a proven configuration is used for most situations. This leads to good reproducibility of, for example, the final energy or the spin polarization angle at the experimental stations.

The findings from these investigations were partly automated, so that the operators now have additional and helpful diagnostics at their disposal with which machine operation can be carried out even more reliably.
Abstract:
In recent years, environmental concerns and the expected shortage of fossil reserves have driven further development of biomaterials. Among them, poly(lactide) (PLA) possesses attractive properties such as good processability and excellent tensile strength and stiffness, equivalent to some commercial petroleum-based polymers (PP, PS, PET, etc.). This biobased polymer is also biodegradable and biocompatible. However, one great disadvantage of commercial PLA is its slow crystallization rate, which restricts its use in many fields. The use of nanofillers is viewed as an efficient strategy to overcome this problem. In this thesis, the effect of bionanofillers in neat PLA and in blends of poly(L-lactide) (PLA)/poly(ε-caprolactone) (PCL) has been investigated. The nanofillers used are: poly(L-lactide-co-ε-caprolactone) and poly(L-lactide-b-ε-caprolactone) grafted on cellulose nanowhiskers, and neat cellulose nanowhiskers (CNW). The grafting of poly(L-lactide-co-caprolactone) and poly(L-lactide-b-caprolactone) onto the nanocellulose has been performed by the "grafting from" technique, in which the polymerization reaction is initiated directly on the substrate surface. The reaction conditions were chosen after a temperature and solvent screening. The effect of the bionanofillers on PLA and on 80/20 PLA/PCL was evaluated by non-isothermal and isothermal DSC analyses. Non-isothermal DSC scans show a nucleating effect of the bionanofillers on PLA. This effect is detectable during PLA crystallization from the glassy state: the cold crystallization temperature is reduced upon the addition of poly(L-lactide-b-caprolactone) grafted on cellulose nanowhiskers, which is the best-performing bionanofiller as a nucleating agent. On the other hand, isothermal DSC analysis of the overall crystallization rate indicates that cellulose nanowhiskers are the best nucleating agents during isothermal crystallization from the melt state. In conclusion, the nanofillers behave differently depending on the processing conditions; however, their efficiency as nucleating agents was clearly demonstrated in both isothermal and non-isothermal conditions.
Abstract:
We present a model for plasticity induction in reinforcement learning which is based on a cascade of synaptic memory traces. In this cascade of so-called eligibility traces, presynaptic input is first correlated with postsynaptic events, next with the behavioral decisions, and finally with the external reinforcement. A population of leaky integrate-and-fire neurons endowed with this plasticity scheme is studied by simulation on different tasks. For operant conditioning with delayed reinforcement, learning succeeds even when the delay is so large that the delivered reward reflects the appropriateness not of the immediately preceding response, but of a decision made earlier in the stimulus-decision sequence. The proposed model therefore does not rely on temporal contiguity between decision and pertinent reward and thus provides a viable means of addressing the temporal credit assignment problem. In the same task, learning speeds up with increasing population size, showing that the plasticity cascade simultaneously addresses the spatial problem of assigning credit to the different population neurons. Simulations on other tasks, such as sequential decision making, serve to highlight the robustness of the proposed scheme and, further, to contrast its performance with that of temporal-difference-based approaches to reinforcement learning.
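A schematic, deliberately simplified reading of the described cascade is sketched below in plain Python/NumPy: a fast Hebbian coincidence trace charges a slower, decision-gated trace, which is converted into a weight change only when the external reinforcement arrives. This is an illustration of the cascade idea only, not the paper's spiking-network equations; all time constants and the learning rate are invented.

import numpy as np

def cascade_step(e_hebb, e_dec, w, pre, post, decision_gate, reward,
                 tau_hebb=50.0, tau_dec=500.0, dt=1.0, lr=1e-3):
    """One time step of a schematic eligibility-trace cascade.

    e_hebb        -- fast trace of pre/post coincidences (matrix, one entry per synapse)
    e_dec         -- slower trace, charged from e_hebb only while the neurons'
                     activity contributes to the current behavioural decision
    w             -- synaptic weight matrix (post x pre)
    pre, post     -- pre-/postsynaptic activities (vectors of spikes or rates)
    decision_gate -- 1.0 while the decision involves these neurons, else 0.0
    reward        -- external reinforcement signal (often 0, possibly delayed)
    """
    # 1. Fast Hebbian trace: decays, driven by pre-post coincidences.
    e_hebb = e_hebb + dt * (-e_hebb / tau_hebb + np.outer(post, pre))
    # 2. Decision trace: decays more slowly, charged from the Hebbian trace
    #    only when the decision gate is open.
    e_dec = e_dec + dt * (-e_dec / tau_dec + decision_gate * e_hebb)
    # 3. Plasticity: the slow trace is read out only when reinforcement
    #    arrives, bridging the delay between decision and reward.
    w = w + lr * reward * e_dec
    return e_hebb, e_dec, w

Because the reward multiplies a trace that has already integrated earlier decision-related coincidences, a weight change can still credit a decision made well before the reinforcement was delivered, which is the temporal credit assignment property emphasised in the abstract.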