894 results for Many-to-many-assignment problem


Relevance:

100.00%

Publisher:

Abstract:

The purpose of this doctoral thesis is to prove existence for a mutually catalytic random walk with infinite branching rate on countably many sites. The process is defined as a weak limit of an approximating family of processes. An approximating process is constructed by adding jumps to a deterministic migration on an equidistant time grid. As the law of the jumps we choose the invariant probability measure of the mutually catalytic random walk with finite branching rate in the recurrent regime. This model was introduced by Dawson and Perkins (1998), and the thesis relies heavily on their work. Owing to the properties of this invariant distribution, which is in fact the exit distribution of planar Brownian motion from the first quadrant, it is possible to establish a martingale problem for the weak limit of any convergent sequence of approximating processes. We prove a duality relation for the solution to this martingale problem, going back to Mytnik (1996) in the case of finite rate branching, and this duality yields weak uniqueness for the solution. Using standard arguments we show that the solution is a Feller process with the strong Markov property. For the case of a single site we prove that the constructed model is the limit of finite rate mutually catalytic branching processes as the branching rate approaches infinity; it therefore seems natural to refer to the model as an infinite rate branching process. A corresponding convergence result for infinitely many sites remains open.
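For orientation, the finite rate model referred to above (Dawson and Perkins, 1998) is commonly written as a system of coupled stochastic differential equations indexed by the sites; the following is a hedged sketch in standard notation, not taken from the abstract itself:

\[
dX_t(k) = (\mathcal{A}X_t)(k)\,dt + \sqrt{\gamma\,X_t(k)\,Y_t(k)}\;dB^1_t(k),
\qquad
dY_t(k) = (\mathcal{A}Y_t)(k)\,dt + \sqrt{\gamma\,X_t(k)\,Y_t(k)}\;dB^2_t(k),
\]

where \(\mathcal{A}\) is the generator of the migration (the underlying random walk), \(\gamma\) the finite branching rate, and the \(B^1(k)\), \(B^2(k)\) independent Brownian motions; the thesis concerns the regime in which \(\gamma\) is sent to infinity.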

Relevance:

100.00%

Publisher:

Abstract:

Startups’ contribution to economic growth is widely recognized. However, a funding gap often limits startups’ development, and venture capital (VC) is, to some extent, a means of closing it: VC firms are among the financial intermediaries best suited to startups. This dissertation focuses on two streams of VC research: the criteria venture capitalists use to evaluate startups, and the effect of VC on innovation. First, although many evaluation criteria have been analyzed, the effect of startup reputation on VC funding has not been assessed empirically, even though reputation is usually positively related to firm performance and may therefore affect VC funding. Analyzing reputation along a generalized visibility dimension and a generalized favorability dimension in a sample of 200 startups founded from 1995 onward and operating in the UK MNT sector, we show that both dimensions of reputation increase the likelihood of receiving VC funding. We also find that management team heterogeneity positively influences that likelihood. Second, studies of the effect of venture capital on innovation have frequently relied on patent data. However, innovation is a process leading from invention to successful commercialization, and while patents capture the upstream side of innovative performance, they describe its downstream side poorly. Trademarks, which reflect the introduction of new products or services, can complete the picture, but empirical studies of trademarking in startups are rare. Analyzing a sample of 192 startups founded from 1996 onward and operating in the UK MNT sector, we find that VC funding has a positive effect on the propensity to register trademarks, as well as on the number and breadth of trademarks.

Relevance:

100.00%

Publisher:

Abstract:

In many application domains data can be naturally represented as graphs. When analytical solutions to a given problem are unfeasible, machine learning techniques can be a viable way to solve it. Classical machine learning techniques are defined for data represented in vectorial form, and only recently have some of them been extended to deal directly with structured data. Among these techniques, kernel methods have shown promising results both in computational complexity and in predictive performance. Kernel methods avoid an explicit mapping into vectorial form by relying on kernel functions, which, informally, are functions that compute a similarity measure between two entities. However, defining good kernels for graphs is challenging because of the difficulty of finding a good trade-off between computational complexity and expressiveness. Another problem we face is learning on data streams, where a potentially unbounded sequence of data is generated by some source. This thesis makes three main contributions. The first is the definition of a new family of kernels for graphs based on Directed Acyclic Graphs (DAGs); we analyzed two kernels from this family, achieving state-of-the-art results, both computationally and in classification accuracy, on real-world datasets. The second contribution makes the application of learning algorithms to streams of graphs feasible; moreover, we defined a principled approach to memory management. The third contribution is the application of machine learning techniques for structured data to non-coding RNA function prediction. In this setting the secondary structure is thought to carry relevant information, but existing methods that consider the secondary structure have prohibitively high computational complexity. We propose applying kernel methods to this domain, obtaining state-of-the-art results.
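To make the notion of a graph kernel concrete, the sketch below computes a simple feature-count kernel in which each node is represented by its own label together with the sorted labels of its neighbours (one step of Weisfeiler-Lehman-style relabelling), and two graphs are compared by the dot product of their feature histograms. This is only an illustration of the general idea; it is not the DAG-based kernel family defined in the thesis, and all graphs and labels below are hypothetical.

    from collections import Counter

    def node_features(graph, labels):
        """graph: dict node -> set of neighbour nodes; labels: dict node -> label."""
        feats = Counter()
        for v, neighbours in graph.items():
            # Feature = own label plus the multiset of neighbour labels.
            key = (labels[v], tuple(sorted(labels[u] for u in neighbours)))
            feats[key] += 1
        return feats

    def graph_kernel(g1, l1, g2, l2):
        """Similarity = dot product of the two feature histograms."""
        f1, f2 = node_features(g1, l1), node_features(g2, l2)
        return sum(f1[k] * f2[k] for k in f1.keys() & f2.keys())

    # Hypothetical toy graphs (adjacency as dict of sets) with node labels.
    g_a = {0: {1}, 1: {0, 2}, 2: {1}}
    g_b = {0: {1, 2}, 1: {0}, 2: {0}}
    print(graph_kernel(g_a, {0: "C", 1: "O", 2: "C"}, g_b, {0: "O", 1: "C", 2: "C"}))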

Relevance:

100.00%

Publisher:

Abstract:

This research investigates how the "Ustica case" played out in Italian public opinion in the years between 1980 and 1992. The expression "Ustica case" refers to the political problem arising from the events surrounding the downing of the Itavia DC-9 civil aircraft on 27 June 1980, in circumstances that, as is well known, were clarified only many years after the fact. The analysis seeks to capture the specific features of the process through which the Ustica affair acquired political relevance within the Italian public sphere, in particular by considering the role played by public opinion in a decade, the Italian 1980s and early 1990s, characterized by a new centrality of the media with respect to the political sphere. Through the analysis of a broad selection of printed sources (about 1,500 articles from the main Italian newspapers and about 700 articles from the organs of the Italian political parties), the study brings to light the media and political dynamics that led to the thematization of an affair that had remained entirely absent from the national political agenda until 1986. The analysis of judicial sources also made it possible to show how the politicization of the Ustica case, built around the opacity/transparency tension of political power and around the effective yet trivializing paradigm of the "stragi di Stato" (state massacres), proved instrumental in reaching, after 1990, the first elements of truth about the tragedy and in widening the case to an international dimension.

Relevance:

100.00%

Publisher:

Abstract:

The Institute for Nuclear Physics at the University of Mainz has operated a worldwide unique accelerator facility for nuclear and particle physics experiments since 1990: the Mainz Microtron (MAMI-B). This accelerator cascade consists of three racetrack microtrons (RTMs) with radio-frequency linear accelerators operating at 2.45 GHz, with which a quasi-continuous electron beam of up to 100 μA can be accelerated to 855 MeV.

In 1999 the implementation of the final construction stage began: a Harmonic Double-Sided Microtron (HDSM, MAMI-C) with a final energy of 1.5 GeV. Its design required some bold steps, for example bending magnets with a field gradient, whose resulting beam-optical properties have a large influence on the longitudinal dynamics of the accelerator. This made it necessary to introduce the "harmonic" mode of operation, in which the two linear accelerators run at two different frequencies.

Many machine parameters (such as RF amplitudes or phases) act directly on the acceleration process, yet their physical values are not always easily accessible to measurement. In an RTM, with its comparatively simple and well-defined beam dynamics, this is unproblematic in routine operation; in the HDSM, however, knowledge of these physical quantities is of considerably greater importance, if only because of the larger number of parameters. Within this work, suitable beam-diagnostic methods were developed with which these machine parameters can be checked and compared with the design specifications.

Since fitting the machine model to a single phase measurement does not always yield unambiguous results because of unavoidable measurement errors, a form of tomography is used: the longitudinal phase space is examined by means of an acceptance measurement. An extended model can then be fitted to the resulting wealth of data, which gives the model parameters much greater significance.

The results of these investigations show that the accelerator behaves as an overall system essentially as predicted and that a large number of different configurations are possible for beam operation; in routine operation, however, this freedom is rarely used and a proven configuration is employed for most situations. This leads to good reproducibility of, for example, the final energy or the spin polarization angle at the experimental stations.

Some of the findings from these investigations were automated, so that the operators now have additional, helpful diagnostics at their disposal with which the machine can be run even more reliably.
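For context, the design of a racetrack microtron hinges on a simple coherence condition that links the energy gain per turn to the RF wavelength; the following is a hedged sketch in standard accelerator notation, not a quantity quoted in the abstract:

\[
\Delta L \;=\; \frac{2\pi\,\Delta E}{e\,c\,B} \;=\; \nu\,\lambda_{\mathrm{RF}}, \qquad \nu \in \mathbb{N},
\]

where \(\Delta E\) is the energy gain per turn in the linac, \(B\) the field of the bending magnets, \(\lambda_{\mathrm{RF}}\) the RF wavelength and \(\Delta L\) the increase of the orbit length from one turn to the next. In the HDSM the gradient dipoles and the two linac frequencies make the corresponding longitudinal dynamics considerably richer, which is why the phase-space diagnostics described above become important.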

Relevance:

100.00%

Publisher:

Abstract:

In recent years, environmental concerns and the expected shortage of fossil reserves have further stimulated the development of biomaterials. Among them, poly(lactide) (PLA) possesses attractive properties such as good processability and excellent tensile strength and stiffness, equivalent to some commercial petroleum-based polymers (PP, PS, PET, etc.). This biobased polymer is also biodegradable and biocompatible. However, one major disadvantage of commercial PLA is its slow crystallization rate, which restricts its use in many fields. The use of nanofillers is viewed as an efficient strategy to overcome this problem. In this thesis, the effect of bionanofillers on neat PLA and on blends of poly(L-lactide) (PLA) and poly(ε-caprolactone) (PCL) has been investigated. The nanofillers used are poly(L-lactide-co-ε-caprolactone) and poly(L-lactide-b-ε-caprolactone) grafted on cellulose nanowhiskers, and neat cellulose nanowhiskers (CNW). The grafting of poly(L-lactide-co-caprolactone) and poly(L-lactide-b-caprolactone) onto the nanocellulose was performed by the "grafting from" technique, in which the polymerization reaction is initiated directly on the substrate surface. The reaction conditions were chosen after a temperature and solvent screening. The effect of the bionanofillers on PLA and on the 80/20 PLA/PCL blend was evaluated by non-isothermal and isothermal DSC analyses. Non-isothermal DSC scans show a nucleating effect of the bionanofillers on PLA; this effect is detectable during crystallization of PLA from the glassy state, and the cold crystallization temperature is reduced upon addition of poly(L-lactide-b-caprolactone) grafted on cellulose nanowhiskers, the best-performing bionanofiller as a nucleating agent. On the other hand, isothermal DSC analysis of the overall crystallization rate indicates that cellulose nanowhiskers are the best nucleating agents during isothermal crystallization from the melt. In conclusion, the nanofillers behave differently depending on the processing conditions, but their efficiency as nucleating agents was clearly demonstrated under both isothermal and non-isothermal conditions.
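Isothermal DSC crystallization data of the kind described above are commonly analyzed with the Avrami model; the abstract does not state which kinetic treatment was used, so the following is only a hedged sketch of that standard approach:

\[
X(t) \;=\; \frac{\int_0^{t} \left(\mathrm{d}H_c/\mathrm{d}t\right)\mathrm{d}t}{\int_0^{\infty} \left(\mathrm{d}H_c/\mathrm{d}t\right)\mathrm{d}t}
\;=\; 1 - \exp\!\left(-k\,t^{\,n}\right),
\qquad
t_{1/2} \;=\; \left(\frac{\ln 2}{k}\right)^{1/n},
\]

where \(X(t)\) is the relative crystallinity obtained from the DSC exotherm, \(n\) the Avrami exponent, \(k\) the rate constant and \(t_{1/2}\) the crystallization half-time, a convenient single-number measure of the overall crystallization rate.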

Relevance:

100.00%

Publisher:

Abstract:

We present a model for plasticity induction in reinforcement learning which is based on a cascade of synaptic memory traces. In this cascade of so-called eligibility traces, presynaptic input is first correlated with postsynaptic events, next with the behavioral decisions, and finally with the external reinforcement. A population of leaky integrate-and-fire neurons endowed with this plasticity scheme is studied by simulation on different tasks. For operant conditioning with delayed reinforcement, learning succeeds even when the delay is so large that the delivered reward reflects the appropriateness not of the immediately preceding response, but of a decision made earlier in the stimulus-decision sequence. The proposed model therefore does not rely on temporal contiguity between a decision and the pertinent reward, and thus provides a viable means of addressing the temporal credit assignment problem. In the same task, learning speeds up with increasing population size, showing that the plasticity cascade simultaneously addresses the spatial problem of assigning credit to the different neurons of the population. Simulations on other tasks, such as sequential decision making, serve to highlight the robustness of the proposed scheme and, further, to contrast its performance with that of temporal-difference-based approaches to reinforcement learning.
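A minimal sketch of the cascade idea, a presynaptic-postsynaptic coincidence trace that is passed on at decision time and finally gated by the delayed reward, might look as follows in discrete time. All variable names, time constants and the rate-based (non-spiking) simplification are assumptions for illustration; this is not the paper's integrate-and-fire implementation.

    import numpy as np

    def cascade_step(w, pre, post, decided, reward, e_syn, e_dec,
                     tau_syn=20.0, tau_dec=200.0, lr=0.01):
        """One discrete time step of a two-stage eligibility-trace cascade.

        pre, post : activity vectors of presynaptic inputs and postsynaptic neurons
        decided   : 1.0 on time steps where a behavioral decision is taken, else 0.0
        reward    : delayed scalar reinforcement (0 until it arrives)
        """
        # Stage 1: correlate presynaptic input with postsynaptic activity.
        e_syn += -e_syn / tau_syn + np.outer(post, pre)
        # Stage 2: at decision times, transfer the synaptic trace onward.
        e_dec += -e_dec / tau_dec + decided * e_syn
        # Stage 3: the external reinforcement reads out the decision trace.
        w += lr * reward * e_dec
        return w, e_syn, e_dec

    # Hypothetical usage with 3 inputs and 2 output neurons.
    rng = np.random.default_rng(0)
    w = np.zeros((2, 3)); e_syn = np.zeros((2, 3)); e_dec = np.zeros((2, 3))
    for t in range(100):
        pre, post = rng.random(3), rng.random(2)
        w, e_syn, e_dec = cascade_step(w, pre, post,
                                       decided=float(t % 10 == 0),
                                       reward=float(t == 99),
                                       e_syn=e_syn, e_dec=e_dec)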

Relevance:

100.00%

Publisher:

Abstract:

Phosphatidylinositol-specific phospholipases C (PI-PLC) are known to participate in many eukaryotic signal transduction pathways and act as virulence factors in lower organisms. Glycerophosphoryl diester phosphodiesterase (GDPD) enzymes are involved in phosphate homeostasis and phospholipid catabolism for energy production. Streptomyces antibioticus phosphatidylinositol-specific phospholipase C (SaPLC1) is a 38 kDa enzyme that displays characteristics of both enzyme superfamilies, representing an evolutionary link between these divergent enzyme classes. SaPLC1 also has a unique catalytic mechanism that involves a trans 1,6-cyclic inositol phosphate intermediate instead of the typical cis 1,2-cyclic inositol phosphate. The mechanism by which this occurs is still unclear. To attack this problem, we carried out a wide mutagenesis scan of the active site and measured the activities of the resulting alanine mutants. A chemical rescue assay was developed to verify that the activity loss was due to the removal of the functional role of the mutated residue. 31P-NMR was employed in characterizing and quantifying intermediates in mutants that slowed the reaction sufficiently. We found that the H37A and H76A mutations support the hypothesis that these structurally conserved residues are also conserved in terms of their catalytic roles: H37 was found to be the general base (GB), while H76 plays the role of general acid (GA). K131 was identified as a semi-conserved key positive charge donor found at the entrance of the active site. By elucidating the SaPLC1 mechanism in relation to its active site architecture, we have increased our understanding of the structure-function relations that support catalysis in the PI-PLC/GDPD superfamily. These findings provide groundwork for in vivo studies of SaPLC1 function and its possible role in novel signaling or metabolism in Streptomyces.

Relevance:

100.00%

Publisher:

Abstract:

Since 1990, the issue of homelessness has become increasingly important in Hungary as a result of economic and structural changes. Various suggestions as to how the problem may be solved have always been preceded by the question "How many homeless people are there?", and there is still no official consensus as to the answer. Counting the homeless is particularly difficult because of the bias in the initial sampling frame due to two factors that characterise this population: the definition of homelessness, and its 'hidden' nature. David aimed to estimate the size of the homeless population of Budapest by using two non-standard sampling methods: snowball sampling and the capture-recapture method. Her calculations are based on three data sets: one snowball data set and two independent list data sets. These estimates, supported by other statistical data, suggest that in 1999 there were about 8,000-10,000 homeless people in Budapest.
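For readers unfamiliar with the two-list capture-recapture idea, the sketch below applies the classic Lincoln-Petersen estimator with the Chapman correction to two overlapping lists. The counts used are purely hypothetical placeholders, not the Budapest data, and the function name is ours.

    def chapman_estimate(n1, n2, m):
        """Chapman-corrected Lincoln-Petersen estimate of population size.

        n1, n2 : sizes of the two independent lists
        m      : number of individuals appearing on both lists
        """
        return (n1 + 1) * (n2 + 1) / (m + 1) - 1

    # Purely hypothetical counts for illustration only.
    print(round(chapman_estimate(n1=1200, n2=900, m=120)))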

Relevance:

100.00%

Publisher:

Abstract:

Stochastic models for three-dimensional particles have many applications in applied sciences. Lévy-based particle models are a flexible approach to particle modelling. The structure of the random particles is given by a kernel smoothing of a Lévy basis. The models are easy to simulate but statistical inference procedures have not yet received much attention in the literature. The kernel is not always identifiable and we suggest one approach to remedy this problem. We propose a method to draw inference about the kernel from data often used in local stereology and study the performance of our approach in a simulation study.
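As a hedged sketch of the model class (standard notation from the Lévy-based particle literature, not spelled out in the abstract), a star-shaped random particle can be described through its radial function on the unit sphere,

\[
R(u) \;=\; \int_{S^{2}} k(u,v)\, L(\mathrm{d}v), \qquad u \in S^{2},
\]

where \(L\) is a Lévy basis and \(k\) the deterministic smoothing kernel; the inference problem discussed above is that of recovering \(k\) from observed particles, for instance from the local stereological data mentioned in the abstract.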

Relevance:

100.00%

Publisher:

Abstract:

The COLOSS BEEBOOK is a practical manual compiling standard methods in all fields of research on the western honey bee, Apis mellifera. The COLOSS network was founded in 2008 as a consequence of the heavy and frequent losses of managed honey bee colonies experienced in many regions of the world (Neumann and Carreck, 2010). As many of the world’s honey bee research teams began to address the problem, it soon became obvious that a lack of standardized research methods was seriously hindering scientists’ ability to harmonize and compare the data on colony losses obtained internationally. In its second year of activity, during a COLOSS meeting held in Bern, Switzerland, the idea of a manual of standardized honey bee research methods emerged. The manual, to be called the COLOSS BEEBOOK, was inspired by publications with similar purposes for fruit fly research (Lindsley and Grell, 1968; Ashburner, 1989; Roberts, 1998; Greenspan, 2004).

Relevance:

100.00%

Publisher:

Abstract:

The COLOSS BEEBOOK is a practical manual compiling standard methods in all fields of research on the western honey bee, Apis mellifera. The COLOSS network was founded in 2008 as a consequence of the heavy and frequent losses of managed honey bee colonies experienced in many regions of the world (Neumann and Carreck, 2010). As many of the world’s honey bee research teams began to address the problem, it soon became obvious that a lack of standardized research methods was seriously hindering scientists’ ability to harmonize and compare the data on colony losses obtained internationally. In its second year of activity, during a COLOSS meeting held in Bern, Switzerland, the idea of a manual of standardized honey bee research methods emerged. The manual, to be called the COLOSS BEEBOOK, was inspired by publications with similar purposes for fruit fly research (Lindsley and Grell, 1968; Ashburner, 1989; Roberts, 1998; Greenspan, 2004).

Relevance:

100.00%

Publisher:

Abstract:

Taking up the thesis of Dipesh Chakrabarty (2009) that human history (including cultural history) on the one hand and natural history on the other must be brought into conversation more than has been done in the past, this presentation focuses more closely on the significance and impact of global climatic conditions and pests on the negotiations that Australian Prime Minister William Morris Hughes carried on with the British government between March and November 1916. Whereas Australia had been able to sell most of its produce in 1914 and 1915, the situation looked more serious in 1916, not least because of the growing shortage of shipping. It was therefore imperative for the Australian government to find a way to solve this problem, not least because it wanted to keep up its own war effort at the pace it had maintained so far. In this context, intentions to make, or press ahead with, a contribution to a war perceived to be more total than those of the past interacted with natural phenomena such as the harvests of 1916, which declined in many parts of the world as a consequence of climatic conditions and pests.

Relevance:

100.00%

Publisher:

Abstract:

Service providers make use of cost-effective wireless solutions to identify, localize, and possibly track users through the mobile devices (MDs) they carry, in order to support added services such as geo-advertisement, security, and management. Indoor and outdoor hotspot areas play a significant role for such services; however, GPS does not work in many of these areas. To solve this problem, service providers leverage available indoor radio technologies, such as WiFi, GSM, and LTE, to identify and localize users. We focus our research on passive services provided by third parties, which are responsible for (i) data acquisition and (ii) processing, and on network-based services, where (i) and (ii) are done inside the serving network. For a better understanding of the parameters that affect indoor localization, we investigate several factors that affect indoor signal propagation for both Bluetooth and WiFi technologies. For GSM-based passive services, we first developed a data acquisition module: a GSM receiver that can overhear GSM uplink messages transmitted by MDs while remaining invisible. A set of optimizations was made to the receiver components to support wideband capture of the GSM spectrum while operating in real time; processing the wide GSM spectrum is made possible by a proposed distributed processing approach over an IP network. Then, to overcome the lack of information about tracked devices’ radio settings, we developed two novel localization algorithms that rely on proximity-based solutions to estimate devices’ locations in real environments. Given the challenges that the indoor environment poses to radio signals, such as NLOS reception and multipath propagation, we developed an original algorithm to detect and remove contaminated radio signals before they are fed to the localization algorithm. To improve the localization further, we extended our work with a hybrid approach that uses both the WiFi and GSM interfaces to localize users. For network-based services, we used a software implementation of an LTE base station to develop our algorithms, which characterize the indoor environment before applying the localization algorithm. Experiments were conducted without any special hardware, prior knowledge of the indoor layout, or offline calibration of the system.
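As a rough illustration of the kind of signal-strength reasoning such systems rely on (not the thesis's algorithms), the sketch below converts a received signal strength into a distance estimate with the log-distance path-loss model and declares proximity when the estimate falls below a threshold. All parameter values, thresholds and readings are assumptions for illustration.

    def rssi_to_distance(rssi_dbm, rssi_at_1m=-40.0, path_loss_exp=2.5):
        """Log-distance path-loss model: distance in metres from an RSSI reading."""
        return 10 ** ((rssi_at_1m - rssi_dbm) / (10 * path_loss_exp))

    def is_in_proximity(rssi_dbm, threshold_m=5.0):
        """Proximity decision used here as a stand-in for a localization primitive."""
        return rssi_to_distance(rssi_dbm) <= threshold_m

    # Hypothetical readings (dBm), assumed already cleaned of suspected NLOS samples.
    for rssi in (-45.0, -60.0, -75.0):
        print(rssi, round(rssi_to_distance(rssi), 1), is_in_proximity(rssi))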
