466 results for Sat Khande
Abstract:
Doctoral program: Ingeniería de Telecomunicación Avanzada (Advanced Telecommunication Engineering)
Abstract:
The aim of our work was to study the molecular mechanisms involved in symptom appearance in plants inoculated either with a virus or with a virus-satellite complex. In the first case, we tried to set up a reliable method for early identification of the PVYNTN strains present in Italy that cause potato tuber necrosis, in order to prevent their spread in the field and to avoid severe yield losses, especially in seed potato production. We attempted to localize the genomic region responsible for tuber necrosis. To this purpose, we carried out RT-PCR experiments using various primer combinations covering PVY genomic regions larger than those previously used by other authors. Like the previous researchers, however, we were not able to differentiate all NTN strains from other PVY strains, probably because of the high variability of the virus, due to both genomic mutations and possible recombination events among different strains. In the second case, we studied the influence of Y-sat (a CaRNA5 satellite) on the symptoms of CMV (Cucumber mosaic virus) in Nicotiana benthamiana plants: strong yellowing instead of simple mosaic. Wang et al. (2004), inoculating the same infectious complex onto tobacco plants transformed with a viral suppressor of plant silencing (HC-PRO), no longer observed yellowing and therefore hypothesized that the change in symptoms was due to the plant's post-transcriptional gene silencing (PTGS) mechanism. In our case, inoculation of N. benthamiana plants transformed with another viral PTGS suppressor (p19), as well as of plants defective for RNA polymerase 6 (involved in systemic silencing), still resulted in yellowing. In our opinion, this suggests that another mechanism is involved in our system.
Abstract:
Cryptography has always played a primary role in the history of humankind, from its dawn to the present day, and the period we live in is certainly no exception. Nowadays many everyday actions, performed even out of sheer habit (banking operations, remotely unlocking a car, logging into Facebook, etc.), conceal the constant presence of sophisticated cryptographic systems. Precisely because of this, it is important that the algorithms in use be certified in some way as reasonably secure, and that research in this field advance constantly, both in the search for new exploits against the algorithms in use and in the introduction of new and increasingly complex security systems. This thesis proposes a possible implementation of a particular kind of cryptanalytic attack, introduced in 2000 by two researchers of the "La Sapienza" University of Rome and known as "Logical Cryptanalysis". The algorithm on which the work focuses is the Data Encryption Standard (DES), a hard cryptographic standard that fell into disuse in 1999 because of its small key size, although it is still algebraically unbroken. The text is structured as follows: the first chapter gives a brief description of DES and its history, introducing the fundamental concepts used throughout the dissertation. The second chapter introduces Logical Cryptanalysis and provides a definition of it, touching on the mathematical concepts needed to understand the following chapters. Chapter 3 presents the first of the two pieces of software developed to make this cryptanalytic attack feasible: a Java library for representing and manipulating logical formulas. The fourth and final chapter describes the program that, using the library of Chapter 3, automatically builds a set of logical propositions semantically equivalent to DES, whose satisfiability check, carried out with dedicated tools (SAT solvers), amounts to performing a known-plaintext attack on the algorithm.
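The core idea of Logical Cryptanalysis (encoding a cipher as a propositional formula whose satisfying assignments reveal the key) can be illustrated on a toy cipher. The sketch below is purely illustrative, not the thesis software: the thesis targets DES, is written in Java, and delegates solving to external SAT solvers, whereas the 8-bit XOR cipher and the naive solver here are simplifications of mine.

```python
# Illustrative sketch (not the thesis software): logical cryptanalysis of a toy
# 8-bit XOR cipher. The cipher c = p XOR k is translated into a CNF formula over
# plaintext variables p_1..p_8 and key variables k_1..k_8; unit clauses pin the
# known plaintext, and any satisfying assignment reveals the key.
from itertools import product

P, C = 0b10110010, 0b01101001      # known plaintext / observed ciphertext

def var_p(i): return i + 1         # variables 1..8  : plaintext bits
def var_k(i): return i + 9         # variables 9..16 : key bits

clauses = []
for i in range(8):
    p, k = var_p(i), var_k(i)
    if (C >> i) & 1:               # p_i XOR k_i = 1  ->  p_i != k_i
        clauses += [[p, k], [-p, -k]]
    else:                          # p_i XOR k_i = 0  ->  p_i == k_i
        clauses += [[-p, k], [p, -k]]
    # unit clause fixing the known plaintext bit
    clauses.append([p] if (P >> i) & 1 else [-p])

def solve(clauses, n_vars):
    """Naive satisfiability search; a real attack would call a SAT solver."""
    for asg in product([False, True], repeat=n_vars):
        if all(any(asg[abs(l) - 1] == (l > 0) for l in c) for c in clauses):
            return asg
    return None

model = solve(clauses, 16)
key = sum(b << i for i, b in enumerate(model[8:16]))
assert P ^ key == C
print(f"recovered key: {key:08b}")
```

For DES itself, each S-box, permutation, and key-schedule step is encoded over many intermediate variables, producing a CNF with thousands of clauses that is then handed to a dedicated SAT solver instead of the brute-force search above.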
Abstract:
Chipping is a fundamental production process in the transformation of forest raw material into fuel biomass, one that will involve an ever-growing number of operators. The aim of the study was to quantify productivity and fuel consumption at 16 chipping operations and to determine the operators' exposure to wood dust under different operating conditions. Two types of operation were identified: an industrial one, with large (300-400 kW) chippers equipped with a cab, and a semi-industrial one with small-to-medium (100-150 kW) chippers without a cab. At all sites, working times, fuel consumption, and wood dust exposure were measured, and chip samples were collected for quality analysis. The industrial operation reached an average hourly productivity of 25 Mg of fresh material and proved 5 times more productive than the semi-industrial one, which reached an average hourly productivity of 5 Mg. Assuming a maximum annual use of 1,500 hours, the semi-industrial operation reaches an annual production of 7,410 Mg, the industrial one 37,605 Mg. Specific diesel consumption (L per Mg of chips) was much lower for the industrial operation, which on average consumes almost half as much as the semi-industrial one. As for operator exposure to wood dust, all samples showed exposure values below 5 mg/m3 (the legal limit under Italian Legislative Decree 81/08). In the semi-industrial operations the average exposure was 1.35 mg/m3, with a maximum of 3.66 mg/m3. In the industrial operations the cab was found to reduce wood dust exposure drastically: average values measured outside the cab were 0.90 mg/m3, while those inside the cab were 0.20 mg/m3.
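For reference, the annual figures follow from unrounded hourly means over the assumed 1,500 h of annual use: 37,605 Mg / 1,500 h ≈ 25.1 Mg/h for the industrial operation and 7,410 Mg / 1,500 h ≈ 4.9 Mg/h for the semi-industrial one, consistent with the rounded hourly productivities of 25 and 5 Mg reported above.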
Abstract:
This dissertation evaluated the data of patients who received a coronary intervention of the left main stem at the University Hospital Mainz between 01 April 2004 and 31 May 2005. In total, a left main intervention was performed in 73 patients (53 men and 20 women) during this period, corresponding to 6% of all interventions performed in this period. Both acute and elective interventions were examined. The patients' ages ranged from 39 to 87 years. The mean left ventricular ejection fraction was 55%. Two patients had one-vessel, 16 patients two-vessel, and 55 patients three-vessel disease. Ten patients had a protected left main stem. A left main bifurcation stenosis was present in 38 patients (52%). As a rule, all patients were prescribed ASA and clopidogrel to continue anticoagulation after the hospital stay. This scheme was deviated from in only three patients, who received Marcumar because of mechanical heart valve prostheses. In 72 of the 73 treated patients, the LCA stenosis was reduced to a degree of stenosis below 30% by the left main intervention; the intervention was thus primarily successful in 99% of patients. Follow-up is available for 69 of the 73 patients. A control angiography is available for 52 patients and missing for 21 (ten deceased patients, seven patients with non-invasive follow-up, four patients without follow-up). During the follow-up period, 38 patients (52% of the total cohort) required no repeat intervention, suffered no complications, and showed a good long-term result. In 29 of the 66 patients who left the hospital alive, late complications occurred and/or a reintervention of the target or non-target vessel became necessary. The mean degree of restenosis of the target vessel among patients with invasive follow-up was 24%. A recurrent stenosis, by definition a restenosis >50%, was present in eleven patients. Early complications occurring during the intervention or the hospital stay comprised seven deaths, one SAT, and ten bleeding events. Complications during long-term follow-up comprised five further deaths (four non-cardiac, one cardiac), one stroke, one SAT, four bypass operations, three NSTEMIs, and four cases of unstable angina. Overall, the complications were death (12 patients), stroke (1 patient), SAT (2 patients), bypass surgery (4 patients), NSTEMI (3 patients), bleeding (10 patients), and unstable angina pectoris (4 patients). A reintervention of the target vessel was performed in 19% and of the non-target vessel in 18% of patients. The results show that the primary success of left main stenting is high, particularly in elective patients with a good intermediate prognosis, and that the intervention is associated with few complications.
Abstract:
This thesis describes the path followed in designing and building an automatic machine for applying tamper-evident seals to the various pharmaceutical packages on the market. The goal of the work presented here is thus to set out and motivate the various design choices through which the machine was actually built in all its components and sold to a pharmaceutical company in the Turin area. The thesis is organized as follows: the first part describes the company that commissioned the project, its activity, and its field of research. This is followed by a description of the operation for which the machine was conceived, the minimum productivity requirements it must meet, and its field of use. The second part presents the various units that make up the machine, explaining their function and the studies and choices made for their realization, with photos and CAD drawings of the components. The measures for correct installation of the machine are then described and, finally, the acceptance tests carried out on the machine, namely SAT (Site Acceptance Tests, i.e., system acceptance at the user's site) and FAT (Factory Acceptance Tests, i.e., system acceptance at the manufacturer's site).
Abstract:
Recent research has shown that a single, arbitrarily efficient algorithm can be significantly outperformed by a portfolio of (possibly on-average slower) algorithms. Within the Constraint Programming (CP) context, a portfolio solver can be seen as a particular constraint solver that exploits the synergy between the constituent solvers of its portfolio to predict which is (or which are) the best solver(s) to run on a new, unseen instance. In this thesis we examine the benefits of portfolio solvers in CP. Although portfolio approaches have been extensively studied for Boolean Satisfiability (SAT) problems, in the more general CP field these techniques have been only marginally studied and used. We conducted this work by investigating, analyzing, and constructing several portfolio approaches for solving both satisfaction and optimization problems. We focused in particular on sequential approaches, i.e., single-threaded portfolio solvers always running on the same core. We started with a first empirical evaluation of portfolio approaches for solving Constraint Satisfaction Problems (CSPs), and then improved on it by introducing new data, solvers, features, algorithms, and tools. Afterwards, we addressed the more general Constraint Optimization Problems (COPs) by implementing and testing a number of models for dealing with COP portfolio solvers. Finally, we came full circle by developing sunny-cp: a sequential CP portfolio solver that also turned out to be competitive in the MiniZinc Challenge, the reference competition for CP solvers.
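To make the prediction step concrete, here is a minimal sketch of feature-based algorithm selection using nearest neighbours. It is a generic illustration, not the actual SUNNY algorithm behind sunny-cp; the feature choice, solver names, and toy data are assumptions of mine.

```python
# Generic sketch of feature-based portfolio selection (my simplification, not
# the SUNNY algorithm used by sunny-cp): given feature vectors of past
# instances and the per-solver runtimes observed on them, pick the solver that
# was fastest on the k nearest neighbours of a new instance.
import math

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def choose_solver(new_features, training_set, runtimes, k=3):
    """training_set: list of feature vectors; runtimes: list of dicts
    mapping solver name -> runtime (seconds) on that instance."""
    neighbours = sorted(range(len(training_set)),
                        key=lambda i: euclidean(new_features, training_set[i]))[:k]
    # Aggregate runtimes over the neighbourhood and pick the fastest solver.
    totals = {}
    for i in neighbours:
        for solver, t in runtimes[i].items():
            totals[solver] = totals.get(solver, 0.0) + t
    return min(totals, key=totals.get)

# Toy usage: two features (e.g. #variables, #constraints), three past instances.
train = [[100, 50], [2000, 900], [150, 80]]
times = [{"gecode": 1.2, "chuffed": 3.0},
         {"gecode": 60.0, "chuffed": 5.1},
         {"gecode": 0.9, "chuffed": 2.5}]
print(choose_solver([120, 60], train, times, k=2))   # -> "gecode"
```

sunny-cp's SUNNY algorithm goes further: from the neighbourhood it builds a schedule of several solvers within the available time budget rather than committing to a single one.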
Abstract:
An industrial thesis on aseptic, controlled-atmosphere pharmaceutical filling lines. The following were analyzed: the standards governing the design and installation of pharmaceutical plants; filling technologies (isolators and RABS), with reference to the company's isolator and the HVAC system serving it; the filling line in general (washing machine - depyrogenation tunnel - isolator - coding station); the SAT validation tests for the installation of the isolator and HVAC system; the development and validation of the VHP decontamination cycle; and optimization calculations for the inertization of the filling chamber.
Abstract:
The purpose of this study was to assess the expression profile of genes with a potential role in the development of insulin resistance (adipokines, cytokines/chemokines, estrogen receptors) in subcutaneous adipose tissue (SAT), visceral adipose tissue (VAT) and placenta of pregnant women with gestational diabetes mellitus (GDM) and age-matched women with physiological pregnancy at the time of Caesarean section. qRT-PCR was used for expression analysis of the studied genes. Leptin gene expression in VAT of the GDM group was significantly higher than in the control group. Gene expressions of interleukin-6 and interleukin-8 were significantly increased, whereas expressions of the genes for estrogen receptors alpha and beta were significantly reduced, in SAT of the GDM group relative to controls. We found no significant differences in the expression of any genes of interest (LEP, RETN, ADIPOR1, ADIPOR2, TNF-alpha, CD68, IL-6, IL-8, ER alpha, ER beta) in placentas of women with GDM relative to controls. We conclude that increased expression of leptin in the visceral adipose depot, together with increased expression of proinflammatory cytokines and reduced expression of estrogen receptors in subcutaneous fat, may play a role in the etiopathogenesis of GDM.
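The abstract does not state how relative expression was quantified; the 2^(-ΔΔCt) method is the usual choice for qRT-PCR, so the sketch below illustrates that calculation with made-up Ct values.

```python
# Hedged sketch of the 2^(-ΔΔCt) relative-quantification method (assumed, not
# stated in the abstract): expression of a gene of interest is normalized to a
# reference (housekeeping) gene, then compared between sample and control.
def fold_change(ct_gene_sample, ct_ref_sample, ct_gene_control, ct_ref_control):
    """Relative expression of a gene in a sample vs. a control group."""
    delta_sample = ct_gene_sample - ct_ref_sample      # ΔCt (sample)
    delta_control = ct_gene_control - ct_ref_control   # ΔCt (control)
    return 2 ** -(delta_sample - delta_control)        # 2^(-ΔΔCt)

# Hypothetical Ct values: leptin in VAT, normalized to a housekeeping gene.
print(fold_change(22.1, 18.0, 24.3, 18.1))  # ~4.3-fold higher in the GDM sample
```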
Abstract:
The variability of the Atlantic meridional overturning circulation (AMOC) strength is investigated in control experiments and in transient simulations of up to the last millennium using the low-resolution Community Climate System Model version 3. In the transient simulations the AMOC exhibits enhanced low-frequency variability, mainly caused by infrequent transitions between two semi-stable circulation states which amount to a 10 percent change of the maximum overturning. One transition is also found in a control experiment, but the time-varying external forcing significantly increases the probability of such events even though it has no direct, linear impact on the AMOC. The transition from a high to a low AMOC state starts with a reduction of convection in the Labrador and Irminger Seas and goes along with a changed barotropic circulation of both gyres in the North Atlantic and a gradual strengthening of convection in the Greenland-Iceland-Norwegian (GIN) Seas. In contrast, the transition from a weak to a strong overturning is induced by decreased mixing in the GIN Seas. As a consequence of the transition, regional sea surface temperature (SST) anomalies with an amplitude of up to 3 K are found in the midlatitude North Atlantic and in the convection regions. The atmospheric response to the SST forcing associated with the transition indicates a significant impact on Scandinavian surface air temperature (SAT) on the order of 1 K. Thus, the changes of the ocean circulation make a major contribution to Scandinavian SAT variability in the last millennium.
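Transitions between two semi-stable states in an index time series of this kind are often identified with a simple two-threshold (hysteresis) rule, so that brief excursions do not count as state changes. The sketch below illustrates that generic idea; it is not the analysis method of the paper, and the thresholds and toy values are invented.

```python
# Illustration only (not the paper's analysis): flagging transitions between
# two semi-stable states in an AMOC index using a two-threshold (hysteresis)
# classifier, so brief excursions do not count as transitions.
def classify_states(index, low, high):
    """Label each time step 'strong' or 'weak'; switch state only when the
    series crosses the opposite threshold."""
    state, labels = ("strong" if index[0] >= high else "weak"), []
    for x in index:
        if state == "strong" and x < low:
            state = "weak"
        elif state == "weak" and x > high:
            state = "strong"
        labels.append(state)
    return labels

# Toy series in Sverdrups: a roughly 10% drop around a mean of 20 Sv.
series = [20.1, 19.8, 20.0, 18.4, 18.0, 17.9, 18.2, 19.9, 20.2]
print(classify_states(series, low=18.5, high=19.5))
```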
Abstract:
Meta-cognition, or "thinking about thinking," has been studied extensively in humans, but very little is known about the process in animals. Although great apes and rhesus macaques (Macaca mulatta) have demonstrated multiple apparently meta-cognitive abilities, other species have either been largely ignored or have failed to convincingly display meta-cognitive traits. Recent work by Marsh, however, raised the possibility that some species may possess rudimentary or partial forms of meta-cognition. This thesis sought to further investigate this possibility through multiple comparative experiments. The goal of the first study was to examine whether lion-tailed macaques, a species that may have a rudimentary form of meta-cognition, are able to use an uncertainty response adaptively, and if so, whether they can use the response flexibly when the stimuli about which the subjects should be uncertain change. The macaques' acquisition of the initial discrimination task is ongoing, and as such there are not yet data to support a conclusion either way. In the second study, tufted capuchins were required to locate a food reward hidden beneath inverted cups that sat on a Plexiglas tray. In some conditions the capuchins were shown where the food was hidden, in others they could infer its location, and in yet others they were given no information about the location of the food. On all trials, however, capuchins could optionally seek additional information by looking up through the Plexiglas into the cups. In general, capuchins did this less often when they were shown the food reward, but not when they could infer the reward's location. These data suggest capuchins meta-cognitively control their information seeking only in some conditions and thus add support to the possibility of a rudimentary form of meta-cognition. Together with other studies, these results may represent early models for rudimentary meta-cognition, although viable alternative explanations remain.
Abstract:
Nitrogen (N) saturation is an environmental concern for forests in the eastern U.S. Although several watersheds of the Fernow Experimental Forest (FEF), West Virginia exhibit symptoms of N saturation, many watersheds display a high degree of spatial variability in soil N processing. This study examined the effects of temperature on net N mineralization and nitrification in N-saturated soils from FEF, and how these effects varied between high N-processing vs. low N-processing soils collected from two watersheds, WS3 (fertilized with [NH4]2SO4) and WS4 (untreated control). Samples of forest floor material (O2 horizon) and mineral soil (to a 5-cm depth) were taken from three subplots within each of four plots that represented the extremes of highest and lowest rates of net N mineralization and nitrification (hereafter, high N and low N, respectively) of untreated WS4 and N-treated WS3: control/low N, control/high N, N-treated/low N, N-treated/high N. Forest floor material was analyzed for carbon (C), lignin, and N. Subsamples of mineral soil were extracted immediately with 1 N KCl and analyzed for NH4+ and NO3– to determine preincubation levels. Extracts were also analyzed for Mg, Ca, Al, and pH. To test the hypothesis that the lack of net nitrification observed in field incubations on the untreated/low N plot was the result of an absence of nitrifier populations, we characterized the bacterial community involved in N cycling by amplification of amoA genes. The remaining soil was incubated for 28 d at three temperatures (10, 20, and 30°C), followed by 1 N KCl extraction and analysis for NH4+ and NO3–. Net nitrification was essentially 100% of net N mineralization for all samples combined. Nitrification rates from lab incubations at all temperatures supported earlier observations based on field incubations. At 30°C, rates from N-treated/high N were three times those of N-treated/low N. The highest rates were found for untreated/high N (two times greater than those of N-treated/high N), whereas untreated/low N exhibited no net nitrification. However, soils exhibiting no net nitrification tested positive for the presence of nitrifying bacteria, causing us to reject our initial hypothesis. We hypothesize that nitrifier populations in such soil are being inhibited by a combination of low Ca:Al ratios in mineral soil and allelopathic interactions with mycorrhizae of ericaceous species in the herbaceous layer.
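The net rates reported above are conventionally computed as the change in KCl-extractable inorganic N over the incubation period; the exact formula is not given in the abstract, so the sketch below uses the standard calculation with hypothetical values.

```python
# Hedged sketch (standard calculation, assumed rather than stated): net rates
# from a 28-day incubation are the change in extractable N between pre- and
# post-incubation extractions, divided by the incubation time.
def net_rate(pre_mg_kg, post_mg_kg, days=28):
    """Net rate in mg N per kg soil per day."""
    return (post_mg_kg - pre_mg_kg) / days

# Hypothetical KCl-extract values (mg N / kg dry soil):
pre_nh4, post_nh4 = 3.0, 3.0     # NH4+ pool roughly steady: produced NH4+ is nitrified
pre_no3, post_no3 = 6.0, 18.0    # NO3- accumulates

nitrification = net_rate(pre_no3, post_no3)                         # NO3- production
mineralization = net_rate(pre_nh4 + pre_no3, post_nh4 + post_no3)   # total inorganic N
print(f"net nitrification:    {nitrification:.2f} mg N/kg/day")
print(f"net N mineralization: {mineralization:.2f} mg N/kg/day")
# With these numbers the two rates are equal, matching the abstract's finding
# that net nitrification was essentially 100% of net N mineralization.
```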
Abstract:
Pat Williams emerged from the Mining City of Butte, Montana with a sense of grassroots, people-oriented politics. His inherent belief in the power of ordinary citizens carried him through the Montana Legislature and into Congress for a record-setting period. The accomplishments of his long career partially obscured the innate progressive and populist instinct that is reflective of the period covered by “In the Crucible of Change.” This film addresses Pat’s early years, when his progressive instincts and activities drew pushback from the giant Anaconda Company, which had held Montana hostage for 75 years. Pat is joined for part of the film by former campaign staffer, and now prominent media consultant, Michael Fenenbock for reflections on Pat’s 1978 “Door-to-Door to Congress” campaign, which demonstrated the power of his belief in the people on the other side of the doors. Pat Williams (b. 1937) rose from teaching grade school in his hometown of Butte, MT, to serving the longest stretch of consecutive terms (9 terms, 18 years) in the US House of Representatives of anyone in Montana history. Pat was a member of the National Guard and attended UM in Missoula and William Jewell College, graduating from the University of Denver. Pat also served in the Montana legislature for 2 terms (1966 & 1968 elections). In 1969, Pat helped his legislative seat-mate John Melcher get elected as Montana’s Eastern District Congressman in the Special Election that June, and went to Washington DC as Melcher’s Executive Assistant. Upon returning to Montana, Pat headed up the Montana offices of the innovative Mountain Plains Family Education Program. In 1974, Pat ran unsuccessfully for Montana’s Western District Congressional seat in a three-way race with former Congressman Arnold Olsen and state Legislator Max Baucus. After the drafting and passage of the 1972 Montana Constitution, Pat was named a member of Montana’s first-ever Reapportionment Commission. In 1978 he ran successfully for Congress, conducting a massive grassroots door-to-door campaign of a year and a half that reached 50,000 doors. In a hotly contested 6-way Democratic primary, Pat won going away, and he also handily won the general election. Pat served in Congress from January 1979 until January 1997: 14 years representing the Western District and 4 years representing the entire state. Upon his retirement from Congress in 1997, Williams returned to Montana, where he has been an instructor at the University of Montana and Senior Fellow and Regional Policy Associate at the Center for the Rocky Mountain West. He is a former member of the Montana Board of Regents and serves on a number of national education-related boards. In Congress Pat was a Deputy Whip of the U.S. House of Representatives and sat on the Budget, Natural Resources, Education and Labor, and Agriculture committees. Pat’s leadership helped pass trailblazing legislation to assist hard-working middle-class families and ensure opportunities for every child. Pat’s fingerprints are on many pieces of important legislation, including the College Middle Income Assistance Act, the Family and Medical Leave Act, the Toddlers and Childhood Disability Act, the Library Services and Construction Act, and the Museum Services Act. Pat successfully sponsored the Lee Metcalf Wilderness Area and the Rattlesnake Wilderness Area, helped save the Bob Marshall Wilderness from oil and gas exploration, and helped ban geothermal energy drilling near the borders of Yellowstone National Park.
As Chairman of the Post-Secondary Education Committee, he protected the National Endowment for the Arts from elimination, a remarkable undertaking during a very trying time for the agency. Pat worked tirelessly with Tribal College leaders to build Montana’s seven Tribal Colleges. He was also responsible for the legislation that created the American Conservation Corps, which became the Corporation for National Service, giving thousands of America’s young people a chance to serve their country and pursue higher education. Pat lives in Missoula with his wife Carol Griffith Williams, former Montana Senate Majority Leader. They have three children and five grandchildren.
Abstract:
We present the results of an investigation into the nature of the information needs of software developers who work in projects that are part of larger ecosystems. In an open-question survey we asked framework and library developers about their information needs with respect to both their upstream and downstream projects. We investigated what kind of information is required, why it is necessary, and how the developers obtain this information. The results show that the downstream needs fall into three categories roughly corresponding to the different stages in their relation with an upstream: selection, adoption, and co-evolution. The less numerous upstream needs fall into two categories: project statistics and code usage. The current-practices part of the study shows that, to satisfy many of these needs, developers use non-specific tools and ad hoc methods. We believe that this is a largely unexplored area of research.
Abstract:
The influence of a reduced Greenland Ice Sheet (GrIS) on Greenland's surface climate during the Eemian interglacial is studied using a set of simulations with different GrIS realizations performed with a comprehensive climate model. We find a distinct impact of changes in the GrIS topography on Greenland's surface air temperatures (SAT) even when correcting for changes in surface elevation, which influences SAT through the lapse rate effect. The resulting lapse-rate-corrected SAT anomalies are thermodynamically driven by changes in the local surface energy balance rather than dynamically caused by anomalous advection of warm/cold air masses. The large-scale circulation is indeed very stable across all sensitivity experiments, and the Northern Hemisphere (NH) flow pattern does not depend on Greenland's topography in the Eemian. In contrast, Greenland's surface energy balance is clearly influenced by changes in the GrIS topography, and this impact is seasonally diverse. In winter, the variable reacting most strongly to changes in the topography is the sensible heat flux (SHF), because of its dependence on surface winds, which themselves are controlled to a large extent by the shape of the GrIS. Hence, regions where a receding GrIS causes higher surface wind velocities also experience anomalous warming through SHF. Vice versa, regions that become flat and ice-free are characterized by low wind speeds, low SHF, and anomalously low winter temperatures. In summer, we find surface warming induced by a decrease in surface albedo in deglaciated areas and in regions which experience surface melting. The Eemian temperature records derived from Greenland proxies thus likely include a temperature signal arising from changes in the GrIS topography. For the Eemian ice found in the NEEM core, our model suggests that up to 3.1 °C of the annual mean Eemian warming can be attributed to these topography-related processes and hence is not necessarily linked to large-scale climate variations.
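The lapse-rate correction mentioned above can be written out explicitly. The sketch below is a generic illustration; the uniform lapse rate of 6.5 K/km is a textbook value assumed here, not taken from the paper (which may use a different or spatially varying value).

```python
# Generic illustration of a lapse-rate correction for surface air temperature
# (SAT) when the ice-sheet surface elevation changes. The uniform lapse rate
# of 6.5 K/km is a textbook value assumed here, not taken from the paper.
LAPSE_RATE = 6.5e-3  # K per metre

def lapse_rate_corrected_anomaly(sat_exp, sat_ctrl, elev_exp, elev_ctrl):
    """SAT anomaly with the purely elevation-driven part removed, so the
    residual reflects energy-balance and circulation changes only."""
    raw_anomaly = sat_exp - sat_ctrl
    elevation_effect = -LAPSE_RATE * (elev_exp - elev_ctrl)  # lower surface -> warmer
    return raw_anomaly - elevation_effect

# Toy example: a grid point where the GrIS surface is 500 m lower in the
# reduced-ice-sheet experiment and SAT is 4.2 K warmer.
print(lapse_rate_corrected_anomaly(sat_exp=-20.0, sat_ctrl=-24.2,
                                   elev_exp=2500.0, elev_ctrl=3000.0))
# -> ~0.95 K of warming not explained by the elevation change alone
```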