904 results for encoding of measurement streams
Abstract:
The Standard Model of particle physics, which describes three of the four fundamental interactions, has so far agreed very well with the measurements of the experiments at CERN, Fermilab and other research institutions. However, not all questions of particle physics can be answered within this model. For example, the fourth fundamental force, gravity, cannot be incorporated into the Standard Model. Moreover, the Standard Model offers no candidate for dark matter, which according to cosmological measurements makes up about 25% of our universe. One of the most promising solutions to these open questions is supersymmetry, which introduces a symmetry between fermions and bosons. This model gives rise to so-called supersymmetric particles, each of which is paired with a Standard Model particle as its partner. If supersymmetry is realized in nature, one possible model of this symmetry is the R-parity-conserving mSUGRA model. In this model the lightest supersymmetric particle (LSP) is neutral and weakly interacting, so it cannot be detected directly in the detector; it must instead be detected indirectly via the energy carried away by the LSP, the missing transverse energy (etmiss).

In 2010 the ATLAS experiment will begin the search for new physics at the pp collider LHC, with a centre-of-mass energy of sqrt(s) = 7-10 TeV and a luminosity of 10^32 cm^-2 s^-1. Because of the very high data rate, resulting from the roughly 10^8 readout channels of the ATLAS detector at a bunch-crossing rate of 40 MHz, a trigger system is needed to reduce the amount of data to be stored. A compromise must be struck between the available trigger rate and a very high trigger efficiency for the interesting events, since only about one event in 10^8 is interesting for the search for new physics. To meet these requirements, the experiment uses a three-level trigger system in which by far the largest data reduction takes place at the first trigger level.

In this thesis, on the one hand, a substantial contribution is made to the basic understanding of the properties of the missing transverse energy at the first trigger level. On the other hand, methods are presented with which the etmiss trigger efficiency for Standard Model processes and possible mSUGRA scenarios can be determined from data. For the optimization of the etmiss trigger thresholds at the first trigger level, the trigger rate was fixed at 100 Hz for a luminosity of 10^33 cm^-2 s^-1. The trigger optimization required several simulations, into which the author's own development work was incorporated. Using these simulations and the optimization algorithms developed here, it is shown that, despite the low trigger rate, the discovery potential (for a signal significance of at least 5 sigma) is increased by up to 66% with respect to the existing ATLAS first-level trigger menu by combining the etmiss threshold with lepton or jet trigger thresholds.
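A minimal sketch, in Python and with purely illustrative numbers, of the figure of merit behind such a trigger optimization: a combined etmiss-plus-lepton/jet selection changes the accepted signal and background yields, and a combination is preferred when it reaches the 5 sigma discovery significance with less integrated luminosity. The simple S/sqrt(B) approximation and the yield values below are assumptions for illustration, not results from the thesis.

import math

def significance(n_signal: float, n_background: float) -> float:
    """Simple S/sqrt(B) significance estimate."""
    return n_signal / math.sqrt(n_background) if n_background > 0 else float("inf")

def discovery_luminosity(sig_per_fb: float, bkg_per_fb: float, target: float = 5.0) -> float:
    """Integrated luminosity (fb^-1) needed to reach the target significance,
    assuming yields scale linearly with luminosity."""
    # S/sqrt(B) grows like sqrt(L), so L = (target / significance_at_1fb)^2
    s1 = significance(sig_per_fb, bkg_per_fb)
    return (target / s1) ** 2

# toy yields per fb^-1 after a hypothetical combined etmiss + jet trigger selection
print(discovery_luminosity(sig_per_fb=30.0, bkg_per_fb=400.0))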
Abstract:
Following a review of the literature, we asked ourselves whether the numerous advertising, promotional and marketing investments, in a word "intangible" investments, generate an increase in the value of the firm in a valuation context, or whether they give rise exclusively to increases in turnover. The most coveted goal is to capitalize investments in intangible assets such as brand building, the use of patents, customer-satisfaction activities and everything that can be defined as immaterial. And yet they coexist in the mare magnum of the same company. Until business-valuation criteria can be applied to marketing performance there can be no growth, because resources are used without any criterion of return on investment.
Abstract:
In this report a new automated optical test for the next generation of photonic integrated circuits (PICs) is presented through the design and assessment of a test-bed. After a brief analysis of the critical problems of current optical tests, the main test features are defined: automation and flexibility, a relaxed alignment procedure, speed-up of the entire test, and data reliability. After studying various solutions, the test-bed components are defined as a lens array, a photo-detector array, and a software controller. Each device is studied and calibrated; the spatial resolution and the robustness against interference at the photo-detector array are investigated. The software is programmed to manage both the PIC input and the photo-detector array output, as well as the data analysis. The test is validated by analysing a state-of-the-art 16-port PIC: the waveguide location, current versus power, and time-spatial power distribution are measured, as well as the optical continuity of an entire path of the PIC. Complexity, alignment tolerance, and measurement time are also discussed.
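A minimal sketch, assuming hypothetical instrument drivers (current_source, pd_array and their methods are not from the report), of the kind of routine such a software controller automates: sweeping the drive current at one PIC input while logging the power read by every element of the photo-detector array, from which current-versus-power curves and the spatial power distribution can be derived.

import time
import numpy as np

def sweep_current_vs_power(current_source, pd_array, currents_ma, settle_s=0.05):
    """Return an array of shape (len(currents_ma), n_detectors)."""
    readings = []
    for i_ma in currents_ma:
        current_source.set_current_ma(i_ma)   # hypothetical driver call
        time.sleep(settle_s)                   # let the device settle
        readings.append(pd_array.read_all())   # hypothetical: one power value per detector
    return np.array(readings)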
Abstract:
Modern software systems, in particular distributed ones, are everywhere around us and are at the basis of our everyday activities. Hence, guaranteeing their correctness, consistency and safety is of paramount importance. Their complexity makes the verification of such properties a very challenging task. It is natural to expect these systems to be reliable and, above all, usable. i) In order to be reliable, compositional models of software systems need to account for consistent dynamic reconfiguration, i.e., changing at runtime the communication patterns of a program. ii) In order to be usable, compositional models of software systems need to account for interaction, i.e., communication patterns among components that collaborate to achieve a common task. The aim of the Ph.D. was to develop powerful techniques based on formal methods for the verification of correctness, consistency and safety properties related to dynamic reconfiguration and communication in complex distributed systems. In particular, static analysis techniques based on types and type systems appeared to be an adequate methodology, considering their success in guaranteeing not only basic safety properties but also more sophisticated ones, such as deadlock or livelock freedom in a concurrent setting. The main contributions of this dissertation are twofold. i) On the components side: we design types and a type system for a concurrent object-oriented calculus to statically ensure consistency of dynamic reconfigurations related to modifications of communication patterns in a program during execution time. ii) On the communication side: we study advanced safety properties related to communication in complex distributed systems, such as deadlock-freedom, livelock-freedom and progress. Most importantly, we exploit an encoding of types and terms of a typical distributed language, the session π-calculus, into the standard typed π-calculus, in order to understand their expressive power.
Abstract:
The present work considers three posterior parietal areas, V6, V6A, and PEc, all operating on different subsets of signals (visual, somatic, motor). The work focuses on the study of their functional properties, to better understand their respective contributions to the neuronal circuits that make possible the interactions between the subject and the external environment. At the caudalmost pole of the parietal lobe lies area V6. Functional data suggest that this area is related to the encoding of both object motion and ego-motion. However, the sensitivity of V6 neurons to optic-flow stimulation has been tested only in human fMRI experiments. Here we addressed this issue by applying to the monkey the same experimental protocol used in human studies. The visual stimulation obtained with the Flow Fields stimulus was the most effective and powerful in activating area V6 in the monkey, further strengthening the homology between the two primates. The neighboring areas, V6A and PEc, show different cytoarchitecture and connectivity profiles, but are both involved in the control of reaching. We studied the sensory responses present in these areas and compared them directly. We also studied the motor-related discharges of PEc neurons during reaching movements in 3D space, comparing the direction and depth tuning of PEc cells with those of V6A. The results show that areas PEc and V6A share several functional properties. Area PEc, unlike V6A, contains a richer and more complex somatosensory input, and a poorer, although complex, visual one. Differences also emerged when comparing the motor-related properties for reaches in depth: the incidence of depth modulations in PEc and the temporal pattern of modulation for depth and direction allow us to delineate a trend between the two parietal visuomotor areas.
Abstract:
Alexithymia refers to difficulties in recognizing one's own emotions and the emotions of others. Theories of emotional embodiment suggest that, in order to understand other people's feelings, observers re-experience, or simulate, in themselves the relevant components (i.e. somatic, motor, visceral) of the emotions expressed by others. In this way, the emotions are "embodied". Critically, to date, there are no studies investigating the ability of alexithymic individuals to embody the emotions conveyed by faces. In the present dissertation, different implicit paradigms and techniques from the field of affective neuroscience have been employed to test a possible deficit in the embodiment of emotions in alexithymia, while subjects were asked to observe faces displaying different expressions: fear, disgust, happiness and neutral. The perceptual encoding of emotional faces and the embodiment of emotions in the somato-sensory and sensory-motor systems were investigated. Moreover, non-communicative motor reactions to emotional stimuli (i.e. visceral reactions) and the interoceptive abilities of alexithymic subjects were explored. The present dissertation provides convergent evidence in support of a deficit in the processing of fearful expressions in subjects with high alexithymic personality traits. Indeed, the pattern of fear-induced changes in perceptual encoding, in the somato-sensory system and in the somato-motor system (both the communicative and the non-communicative one) is widely and consistently altered in alexithymia. This supports the hypothesis of a diminished response to fearful stimuli in alexithymia. In addition, the overall findings on happiness and disgust, although preliminary, are of interest: the results on happiness revealed a defective perceptual encoding coupled with a slight difficulty (i.e. delayed responses) at the level of the communicative somato-motor system, while the emotion of disgust was found to be abnormally embodied at the level of the somato-sensory system.
Abstract:
The idea of matching the resources spent in the acquisition and encoding of natural signals to their intrinsic information content has driven nearly a decade of research under the name of compressed sensing. In this doctoral dissertation we develop some extensions and improvements upon this technique's foundations, by modifying the random sensing matrices on which the signals of interest are projected in order to achieve different objectives. Firstly, we propose two methods for adapting sensing matrix ensembles to the second-order moments of natural signals. These techniques leverage the maximisation of different proxies for the quantity of information acquired by compressed sensing, and are efficiently applied to the encoding of electrocardiographic tracks with minimum-complexity digital hardware. Secondly, we focus on the possibility of using compressed sensing as a method to provide a partial, yet cryptanalysis-resistant, form of encryption; in this context, we show how a random matrix generation strategy with a controlled amount of perturbations can be used to distinguish between multiple user classes with different quality of access to the encrypted information content. Finally, we explore the application of compressed sensing to the design of a multispectral imager, by implementing an optical scheme that entails a coded aperture array and Fabry-Pérot spectral filters. The signal recoveries obtained by processing real-world measurements show promising results that leave room for improving the calibration of the sensing matrix in the devised imager.
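A minimal sketch, not the dissertation's actual algorithm, of the general idea of adapting a random sensing matrix to second-order signal statistics: the sensing rows are drawn from a Gaussian whose covariance blends the identity with the (trace-normalised) signal correlation matrix, so that on average the projections collect more signal energy than purely i.i.d. rows. The AR(1) correlation model and the mixing parameter are assumptions for illustration.

import numpy as np

def adapted_sensing_matrix(Cx, m, mix=0.5, rng=None):
    """Draw m sensing rows with covariance (1-mix)*I + mix*(trace-normalised Cx)."""
    rng = np.random.default_rng(rng)
    n = Cx.shape[0]
    Cx_n = Cx * (n / np.trace(Cx))                     # normalise signal correlation energy
    Crow = (1.0 - mix) * np.eye(n) + mix * Cx_n        # row covariance of the sensing ensemble
    L = np.linalg.cholesky(Crow + 1e-9 * np.eye(n))
    return rng.standard_normal((m, n)) @ L.T           # rows ~ N(0, Crow)

# toy use: an AR(1)-correlated "natural" signal, encoded with m << n measurements
n, m, rho = 256, 64, 0.95
Cx = rho ** np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
A = adapted_sensing_matrix(Cx, m, mix=0.5, rng=0)
x = np.linalg.cholesky(Cx) @ np.random.default_rng(1).standard_normal(n)
y = A @ x                                              # compressed measurements
print(A.shape, y.shape)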
Abstract:
The aim of this work is to connect two aspects that have historically been disconnected. The first is the long "beyond GDP" debate, which has continued uninterrupted for about half a century. The second concerns the use of performance measurement and evaluation systems in the Italian public sector. The evolution of the debate on GDP is illustrated through a historical excursus of the critical thinking developed over roughly fifty years, analysing the arguments put forward by scholars to refute the use of GDP as a universal measure of well-being. Taking up this suggestion, Istat, in collaboration with CNEL, launched a project to identify new indicators to complement GDP, capable of measuring not only economic growth but also social and sustainable well-being, through the analysis of indicators referring to 12 identified well-being domains. The Istat-CNEL project was joined by the UrBES project, promoted by Istat and by ANCI's coordination of metropolitan mayors, which set up a network of metropolitan cities to experiment with the measurement and comparison of equitable and sustainable urban well-being indicators, adopting a project of the Municipality of Bologna and Laboratorio Urbano (a centre for documentation, research and proposals on cities), which administered an online questionnaire to different target groups. The answers to the open questions were processed with Taltac, a text-analysis software package, in order to identify the "profiles" of the respondents, associating the results of the analysis with the structural variables of the questionnaire. In the last part, the services and projects delivered by the Municipality of Bologna are mapped onto the UrBES dimensions, in order to assess the impact of public policies on citizens' quality of life and well-being, pointing out the critical issues linked to the lack of adequate data.
Abstract:
Surface-based measurement systems play a key role in defining the ground truth for climate modeling and satellite product validation. The Italian-French station Concordia has been operating year-round since 2005 at Dome C (75°S, 123°E, 3230 m) on the East Antarctic Plateau. A Baseline Surface Radiation Network (BSRN) site was deployed and became operational in January 2006 to measure the downwelling components of the radiation budget, and was subsequently expanded in April 2007 to measure upwelling radiation. Hence, almost a decade of measurements is now available, suitable for defining a statistically significant climatology of the radiation budget of Concordia, including possible trends, by specifically assessing the effects of clouds and water vapor on the SW and LW net radiation. A well-known and robust clear-sky identification algorithm (Long and Ackerman, 2000) has been operationally applied to the downwelling SW components to identify cloud-free events and to fit a parametric equation determining the clear-sky reference over the Antarctic daylight period (September to April). A new model for surface broadband albedo has been developed in order to better describe the features of the area. Then, a novel clear-sky LW parametrization, based on a priori assumptions about the inversion-layer structure combined with the daily and annual oscillations of the surface temperature, has been adopted and validated. The longwave-based method is then exploited to extend the cloud radiative forcing studies to the nighttime period (winter). The results indicate an inter-annual and intra-annual warming behaviour, i.e. 13.70 W/m2 on average, approaching a neutral effect in summer, when the SW CRF compensates the LW CRF, and warming during the rest of the year, due mainly to the CRF induced on the LW component.
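A minimal sketch, assuming hypothetical column names and a clear-sky reference already produced by a Long and Ackerman style fit (not computed here), of how the SW and LW cloud radiative forcing terms discussed above can be formed as differences between all-sky and clear-sky net fluxes.

import pandas as pd

def cloud_radiative_forcing(df: pd.DataFrame) -> pd.DataFrame:
    """Assumed columns (W/m2): sw_down, sw_up, lw_down, lw_up and the
    corresponding *_clear columns from a clear-sky fit."""
    net_all = (df.sw_down - df.sw_up) + (df.lw_down - df.lw_up)
    net_clear = (df.sw_down_clear - df.sw_up_clear) + (df.lw_down_clear - df.lw_up_clear)
    crf_sw = (df.sw_down - df.sw_up) - (df.sw_down_clear - df.sw_up_clear)   # SW cloud forcing
    crf_lw = (df.lw_down - df.lw_up) - (df.lw_down_clear - df.lw_up_clear)   # LW cloud forcing
    return pd.DataFrame({"CRF_SW": crf_sw, "CRF_LW": crf_lw, "CRF_net": net_all - net_clear})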
Abstract:
The paper aims at explaining the adoption of policy programs. We use the garbage can model of organizational choice as our theoretical framework and complement it with the institutional setting of administrative decision-making in order to understand the complex causation of policy program adoption. Institutions distribute decision power through rules and routines and shape actor identities and their interpretations of situations. We therefore expect institutions to play a role when a policy window opens. We explore the configurative explanations for program adoption in a systematic comparison of the adoption of new alcohol policy programs in the Swiss cantons, employing Qualitative Comparative Analysis. The most important conditions turn out to be the organizational elements of the administrative structure that are decisive for the coupling of the streams. The results imply that classic bureaucratic structures are better suited to putting policies into practice than limited government.
Abstract:
Currently, photon Monte Carlo treatment planning (MCTP) for a patient stored in the patient database of a treatment planning system (TPS) can usually only be performed using a cumbersome multi-step procedure requiring many user interactions. This means that automation is needed for use in clinical routine. In addition, because of the long computing times in MCTP, optimization of the MC calculations is essential. For these purposes a new graphical user interface (GUI)-based photon MC environment has been developed, resulting in a very flexible framework. In this way, appropriate MC transport methods are assigned to different geometric regions while still benefiting from the features included in the TPS. In order to provide a flexible MC environment, the MC particle transport has been divided into different parts: the source, the beam modifiers and the patient. The source part includes the phase-space source, source models and full MC transport through the treatment head. The beam modifier part consists of one module for each beam modifier. To simulate the radiation transport through each individual beam modifier, one of three full MC transport codes can be selected independently. Additionally, for each beam modifier a simple or an exact geometry can be chosen. Thereby, different complexity levels of radiation transport are applied during the simulation. For the patient dose calculation, two different MC codes are available. A special plug-in in Eclipse, providing all necessary information by means of DICOM streams, was used to start the developed MC GUI. The implementation of this framework separates the MC transport from the geometry, and the modules pass the particles in memory; hence, no files are used as the interface. The implementation is realized for the 6 and 15 MV beams of a Varian Clinac 2300 C/D. Several applications demonstrate the usefulness of the framework. Apart from applications dealing with the beam modifiers, two patient cases are shown, in which comparisons are performed between MC-calculated dose distributions and those calculated by a pencil beam or the AAA algorithm. Interfacing this flexible and efficient MC environment with Eclipse allows widespread use for all kinds of investigations, from timing and benchmarking studies to clinical patient studies. Additionally, it is possible to add modules, keeping the system highly flexible and efficient.
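A minimal sketch, with hypothetical class and method names rather than the actual framework's API, of the modular in-memory chain described above: particles are handed from the source module through each beam-modifier module to the patient dose module, with no file-based interface between modules.

from dataclasses import dataclass
from typing import Iterable, List

@dataclass
class Particle:
    energy_mev: float
    x: float; y: float; z: float
    dx: float; dy: float; dz: float

class Module:
    def transport(self, particles: Iterable[Particle]) -> List[Particle]:
        raise NotImplementedError

class PhaseSpaceSource(Module):
    def transport(self, particles):
        # in a real framework this would read a phase space or simulate the
        # treatment head; here it simply passes the primaries through
        return list(particles)

class BeamModifier(Module):
    def __init__(self, name: str, exact_geometry: bool = False):
        self.name, self.exact_geometry = name, exact_geometry
    def transport(self, particles):
        # placeholder for transport through the modifier geometry
        return [p for p in particles if p.energy_mev > 0.01]

class PatientDose(Module):
    def transport(self, particles):
        # placeholder for dose scoring in the patient geometry
        return particles

def run_chain(source, modifiers, patient, primaries):
    particles = source.transport(primaries)
    for mod in modifiers:          # particles stay in memory between modules
        particles = mod.transport(particles)
    return patient.transport(particles)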
Abstract:
Hypothesis: Early recognition of coagulopathy may improve the care of patients with multiple injuries. Rapid thrombelastography (RapidTEG) is a new variant of thrombelastography (TEG) in which coagulation is initiated by the addition of protein tissue factor. The kinetics of coagulation and the times of measurement were compared for two variants of TEG: RapidTEG and conventional TEG, in which coagulation was initiated with kaolin. The measurements were performed on blood samples from 20 patients with multiple injuries. The RapidTEG results were also compared with conventional measurements of blood coagulation. The mean time for the RapidTEG test itself was 19.2 +/- 3.1 minutes (mean +/- SD), in comparison with 29.9 +/- 4.3 minutes for kaolin TEG and 34.1 +/- 14.5 minutes for conventional coagulation tests. Measured from admission of the patients to the resuscitation bay until the results were available, the mean time for RapidTEG was 30.8 +/- 5.72 minutes, in comparison with 41.5 +/- 5.66 minutes for kaolin TEG and 64.9 +/- 18.8 minutes for conventional coagulation tests. There were significant correlations between the RapidTEG results and those from kaolin TEG and conventional coagulation tests. RapidTEG is the most rapid available test providing reliable information on coagulopathy in patients with multiple injuries. This has implications for improving patient care.
Abstract:
This paper contrasts finite and non-finite complement constructions containing the matrix verb promise. Using data from the British National Corpus, I show that when no explicit mention is made of the promisee, the non-finite form of complement is overwhelmingly preferred to its finite counterparts. The exact opposite is the case when the promisee is mentioned between the matrix verb and the complement clause. In addition, the promiser in the "x promise y to-infinitive" construction is almost always pronominal. I suggest that these two facts, the dispreference for the to-infinitive form of complement when the promisee is mentioned and the pronominal encoding of the promiser in such cases, are both related to the very rarity of this form of construction in English. Data are adduced showing that another rare construction, the so-called possessive -ing construction, also occurs with a disproportionate number of pronominal subjects. It is suggested that the preference for pronominal subjects in these constructions may be related to a wish to reduce the overall processing complexity of the predications in question.
Abstract:
Health literacy (HL) is context-specific. In public health and health promotion, HL in the private realm refers to individuals' knowledge and skills to prevent disease and to promote health in everyday life. However, there is a scarcity of measurement tools explicitly geared to private-realm contexts. Our aim was to develop and test a short survey tool that captures different dimensions of HL in the context of family and friends. We used cross-sectional data from the Swiss Federal Surveys of Adolescents from 2010 to 2011, comprising 7983 males and 366 females aged 18 to 25 years. HL was assessed through a set of eight items (self-reports). We used principal component analysis to explore the underlying factor structure among these items in the male sample and confirmatory factor analysis to verify the factor structure in the female sample. The results showed that the tested item set represented dimensions of functional, interactive and critical HL. Two sub-dimensions, understanding versus finding health-relevant information, denoted functional HL. Interactive and critical HL were each represented by two items. A sum score based on all eight items (Cronbach's α: 0.64) showed the expected positive associations with own and parental education among males and females (p < 0.05). The short item set appears to be a feasible measurement tool for assessing HL in the private realm. Its broader application in survey studies may help to improve our understanding of how this form of HL is distributed in the general population.
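A minimal sketch, using simulated responses rather than the survey data, of the internal-consistency figure reported above: Cronbach's alpha for a k-item scale, computed from a respondents-by-items score matrix.

import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: 2-D array, rows = respondents, columns = the k scale items."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()     # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)       # variance of the sum score
    return (k / (k - 1)) * (1.0 - item_vars / total_var)

# toy usage with simulated 8-item responses on a 1-4 scale
rng = np.random.default_rng(0)
latent = rng.normal(size=(500, 1))
scores = np.clip(np.round(2.5 + latent + rng.normal(scale=1.0, size=(500, 8))), 1, 4)
print(round(cronbach_alpha(scores), 2))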
Abstract:
Background: For summative examinations a minimum reliability of 0.8 is usually required. For practical examinations such as OSCEs, 0.7 is sometimes accepted (Downing 2004). But what does the precision of a measurement with a reliability of 0.7 or 0.8 actually look like? Methods: Using various statistical methods such as the standard error of measurement or generalizability theory, reliability can be translated into a confidence interval around an observed candidate score (Brennan 2003, Harvill 1991, McManus 2012). If, for example, a candidate has scored 57 points in an examination, his or her true performance varies around this value because of the measurement imprecision of the examination (e.g. between 50 and 64 points). Measurement precision is particularly important near the pass mark. If in our example the pass mark were 60 points, the candidate with 57 points would nominally have failed, but given the uncertainty around the measured score he or she might in truth have narrowly passed. Transferring this reasoning to all candidates of an examination, one can determine the number of borderline candidates, i.e. all those candidates whose examination result lies so close to the pass mark that their individual result may be false positive or false negative. Results: The number of borderline candidates in an examination depends not only on the reliability, but also on the candidates' performance, the variance, the distance of the pass mark from the mean, and the skewness of the distribution. Using model data and real examination data, the relationship between reliability and the number of borderline candidates is presented in a way that is accessible to the non-statistician. It is shown why even a reliability of 0.8 will not provide satisfactory measurement precision in particular situations, while in some OSCEs the reliability can almost be ignored. Conclusions: Calculating or estimating the number of borderline candidates instead of the reliability improves, in an intuitive way, the understanding of the precision of an examination. When deciding how many stations a summative OSCE needs or how long an MC examination should last, borderline candidates are a more valid decision criterion than reliability.
References:
Brennan, R.L. (2003) Generalizability Theory. New York: Springer.
Downing, S.M. (2004) 'Reliability: on the reproducibility of assessment data', Medical Education, 38, 1006-12.
Harvill, L.M. (1991) 'Standard Error of Measurement', Educational Measurement: Issues and Practice, 33-41.
McManus, I.C. (2012) 'The misinterpretation of the standard error of measurement in medical education: A primer on the problems, pitfalls and peculiarities of the three different standard errors of measurement', Medical Teacher, 34, 569-76.
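A minimal sketch, illustrative rather than the authors' exact procedure, of the reasoning above: reliability is translated into a standard error of measurement, SEM = SD * sqrt(1 - reliability), and candidates whose observed score lies within a confidence band of the pass mark are counted as borderline. The candidate scores below are simulated, not examination data.

import numpy as np

def borderline_candidates(scores, reliability, pass_mark, z=1.96):
    scores = np.asarray(scores, dtype=float)
    sem = scores.std(ddof=1) * np.sqrt(1.0 - reliability)   # standard error of measurement
    half_width = z * sem                                     # e.g. 95% confidence half-width
    borderline = np.abs(scores - pass_mark) < half_width     # results that may be false +/-
    return sem, int(borderline.sum())

# toy example: 200 candidates, mean 65, SD 10, pass mark 60, reliability 0.8
rng = np.random.default_rng(0)
scores = rng.normal(65, 10, size=200)
sem, n_border = borderline_candidates(scores, reliability=0.8, pass_mark=60)
print(f"SEM = {sem:.1f} points, borderline candidates: {n_border}")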