911 results for multi attribute utility theory


Relevance:

30.00%

Publisher:

Abstract:

This study reports one of the first controlled studies to examine the impact of a school-based positive youth development program (Lerner, Fisher, & Weinberg, 2000) on promoting qualitative change in life course experiences as a positive intervention outcome. The study built on a recently proposed relational developmental methodological metanarrative (Overton, 1998) and advances in the use of qualitative research methods (Denzin & Lincoln, 2000). It investigated the use of the Life Course Interview (Clausen, 1998) and an integrated qualitative and quantitative data analytic strategy (IQDAS) to provide empirical documentation of the impact of the Changing Lives Program on qualitative change in positive identity in a multicultural population of troubled youth in an alternative public high school. The psychosocial life course intervention approach used in this study draws its developmental framework from both psychosocial developmental theory (Erikson, 1968) and life course theory (Elder, 1998), and its intervention strategies from Freire's (1983/1970) transformative pedagogy. Using the 22 participants in the Intervention Condition and the 10 participants in the Control Condition, RMANOVAs found significantly more positive qualitative change in personal identity for program participants relative to the non-intervention control condition. In addition, a 2x2x2x3 mixed-design RMANOVA, in which Time (pre, post) was the repeated factor and Condition (Intervention versus Control), Gender, and Ethnicity the between-group factors, also found significant Time by Gender and Time by Ethnicity interactions. Moreover, the directionality of the basic pattern of change was positive for participants of both genders and all three ethnic groups. The pattern of the moderation effects also indicated a marked tendency for participants in the intervention group to characterize their sense of self as more secure and less negative at the end of their first semester in the intervention, a tendency that was stable across both genders and all three ethnicities. The basic differential pattern, an increase in the intervention condition in positive characterization of sense of self relative both to pretest and to the direction of movement of the non-intervention controls, was stable across both genders and all three ethnic groups.

Relevance:

30.00%

Publisher:

Abstract:

We consider experimentally and theoretically a refined parameter space near the transition to multi-pulse mode-locking. Near the transition, the onset of instability is initiated by a Hopf (periodic) bifurcation. As cavity energy is increased, the band of unstable, oscillatory modes generates chaotic behavior between single- and multi-pulse operation. Theory and experiment are in good qualitative agreement, and they suggest that the phenomenon is of a universal nature in mode-locked lasers at the onset of multi-pulsing from N to N + 1 pulses per round trip. This is the first theoretical and experimental characterization of the transition behavior, made possible by a highly refined tuning of the gain pump level. © 2010 Copyright SPIE - The International Society for Optical Engineering.

Relevance:

30.00%

Publisher:

Abstract:

X-ray computed tomography (CT) imaging constitutes one of the most widely used diagnostic tools in radiology today, with nearly 85 million CT examinations performed in the U.S. in 2011. CT imparts a relatively high radiation dose to the patient compared to other x-ray imaging modalities, and as a result of this, coupled with its popularity, CT is currently the single largest source of medical radiation exposure to the U.S. population. For this reason, there is a critical need to optimize CT examinations such that the dose is minimized while the quality of the CT images is not degraded. This optimization can be difficult to achieve because of the relationship between dose and image quality: all else being equal, reducing the dose degrades image quality and can impact the diagnostic value of the CT examination.

A recent push from the medical and scientific community towards using lower doses has spawned new dose reduction technologies such as automatic exposure control (i.e., tube current modulation) and iterative reconstruction algorithms. In theory, these technologies could allow scanning at reduced doses while maintaining image quality at an acceptable level. There is therefore a scientific need to establish the dose reduction potential of these new technologies in an objective and rigorous manner. Establishing these dose reduction potentials requires precise and clinically relevant metrics of CT image quality, as well as practical and efficient methodologies to measure such metrics on real CT systems. The currently established methodologies for assessing CT image quality are not appropriate for assessing modern CT scanners that implement the aforementioned dose reduction technologies.

Thus, the purpose of this doctoral project was to develop, assess, and implement new phantoms, image quality metrics, analysis techniques, and modeling tools that are appropriate for image quality assessment of modern clinical CT systems. The project developed image quality assessment methods in the context of three distinct paradigms: (a) uniform phantoms, (b) textured phantoms, and (c) clinical images.

The work in this dissertation used the “task-based” definition of image quality. That is, image quality was broadly defined as the effectiveness with which an image can be used for its intended task. Under this definition, any assessment of image quality requires three components: (1) a well-defined imaging task (e.g., detection of subtle lesions), (2) an “observer” to perform the task (e.g., a radiologist or a detection algorithm), and (3) a way to measure the observer’s performance in completing the task at hand (e.g., detection sensitivity/specificity).

First, this task-based image quality paradigm was implemented using a novel multi-sized phantom platform (with uniform background) developed specifically to assess modern CT systems (Mercury Phantom, v3.0, Duke University). A comprehensive evaluation was performed on a state-of-the-art CT system (SOMATOM Definition Force, Siemens Healthcare) in terms of noise, resolution, and detectability as a function of patient size, dose, tube energy (i.e., kVp), automatic exposure control, and reconstruction algorithm (Filtered Back-Projection, FBP, vs. Advanced Modeled Iterative Reconstruction, ADMIRE). A mathematical observer model (i.e., a computer detection algorithm) was implemented and used as the basis of image quality comparisons. It was found that image quality increased with increasing dose and decreasing phantom size. The CT system exhibited nonlinear noise and resolution properties, especially at very low doses, large phantom sizes, and for low-contrast objects. Objective image quality metrics generally increased with increasing dose and ADMIRE strength, and with decreasing phantom size. The ADMIRE algorithm could offer comparable image quality at reduced doses or improved image quality at the same dose (an increase in detectability index of up to 163%, depending on iterative strength). The use of automatic exposure control resulted in more consistent image quality with changing phantom size.
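The abstract does not spell out the observer model; a formulation commonly used for this kind of task-based assessment is a non-prewhitening (NPW) model observer evaluated in the Fourier domain from a task function, the task transfer function (TTF), and the NPS. The sketch below is a minimal illustration under that assumption, with purely illustrative input arrays.

```python
import numpy as np

def npw_detectability(f, W, TTF, NPS):
    """Non-prewhitening model observer detectability index d'.

    f   : 1D array of radial spatial frequencies (cycles/mm)
    W   : task function |W(f)| (Fourier transform of the signal to detect)
    TTF : task transfer function (system resolution under the task conditions)
    NPS : noise power spectrum of the reconstructed images

    Standard Fourier-domain NPW expression; radial symmetry is assumed,
    so 2D integrals reduce to 1D integrals with a 2*pi*f weight.
    """
    num = np.trapz(W**2 * TTF**2 * 2 * np.pi * f, f) ** 2
    den = np.trapz(W**2 * TTF**4 * NPS * 2 * np.pi * f, f)
    return np.sqrt(num / den)

# Hypothetical example: a small low-contrast "designer nodule" task with a
# Gaussian-like TTF and a ramp-shaped NPS (all values are illustrative only).
f = np.linspace(0.01, 1.0, 200)              # cycles/mm
W = 20 * np.exp(-(np.pi * 3.0 * f) ** 2)     # crude low-pass task function
TTF = np.exp(-(f / 0.5) ** 2)
NPS = 50 * f * np.exp(-(f / 0.4) ** 2)
print(f"d' = {npw_detectability(f, W, TTF, NPS):.2f}")
```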

Based on those results, the dose reduction potential of ADMIRE was further assessed specifically for the task of detecting small (<=6 mm) low-contrast (<=20 HU) lesions. A new low-contrast detectability phantom (with uniform background) was designed and fabricated using a multi-material 3D printer. The phantom was imaged at multiple dose levels and images were reconstructed with FBP and ADMIRE. Human perception experiments were performed to measure the detection accuracy from FBP and ADMIRE images. It was found that ADMIRE had equivalent performance to FBP at 56% less dose.
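The abstract does not state how the 56% figure was derived; one common way to express such a dose reduction potential is to interpolate detection accuracy as a function of dose for each algorithm and find the ADMIRE dose that matches FBP accuracy at the reference dose. A minimal sketch with hypothetical accuracy values:

```python
import numpy as np

# Hypothetical detection accuracies from a human perception experiment,
# measured at several dose levels (values are illustrative only).
dose = np.array([25, 50, 100, 200])            # percent of reference dose
acc_fbp    = np.array([0.62, 0.71, 0.80, 0.88])
acc_admire = np.array([0.70, 0.80, 0.87, 0.92])

# Accuracy of FBP at the full reference dose.
target = np.interp(100, dose, acc_fbp)

# ADMIRE dose achieving the same accuracy (inverse interpolation).
matched_dose = np.interp(target, acc_admire, dose)

print(f"Equivalent ADMIRE dose: {matched_dose:.0f}% "
      f"-> dose reduction potential ~ {100 - matched_dose:.0f}%")
```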

Using the same image data as the previous study, a number of different mathematical observer models were implemented to assess which models would result in image quality metrics that best correlated with human detection performance. The models included simple metrics of image quality, such as the contrast-to-noise ratio (CNR), and more sophisticated observer models, such as the non-prewhitening matched filter observer model family and the channelized Hotelling observer model family. It was found that the non-prewhitening matched filter observers and the channelized Hotelling observers both correlated strongly with human performance. Conversely, CNR was found not to correlate strongly with human performance, especially when comparing different reconstruction algorithms.
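The observer models are only named in the abstract; as a rough illustration, a minimal channelized Hotelling observer can be written as below, assuming stacks of signal-present and signal-absent ROIs and a caller-supplied channel matrix (e.g., Gabor or Laguerre-Gauss channels).

```python
import numpy as np

def cho_detectability(signal_rois, noise_rois, channels):
    """Channelized Hotelling observer detectability index.

    signal_rois : (n_s, npix) array of signal-present ROIs (flattened)
    noise_rois  : (n_n, npix) array of signal-absent ROIs (flattened)
    channels    : (npix, n_ch) channel matrix (e.g., Gabor channels)

    Needs enough ROIs (n_s, n_n >> n_ch) for a stable channel covariance.
    """
    vs = signal_rois @ channels             # channel outputs, signal present
    vn = noise_rois @ channels              # channel outputs, signal absent
    dv = vs.mean(axis=0) - vn.mean(axis=0)  # mean channel-output difference
    # Average intra-class covariance of the channel outputs.
    K = 0.5 * (np.cov(vs, rowvar=False) + np.cov(vn, rowvar=False))
    w = np.linalg.solve(K, dv)              # Hotelling template in channel space
    return float(np.sqrt(dv @ w))           # d' = sqrt(dv^T K^-1 dv)
```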

The uniform background phantoms used in the previous studies provided a good first-order approximation of image quality. However, due to their simplicity and to the complexity of iterative reconstruction algorithms, it is possible that such phantoms are not fully adequate to assess the clinical impact of iterative algorithms, because patient images obviously do not have smooth uniform backgrounds. To test this hypothesis, two textured phantoms (classified as gross texture and fine texture) and a uniform phantom of similar size were built and imaged on a SOMATOM Flash scanner (Siemens Healthcare). Images were reconstructed using FBP and Sinogram Affirmed Iterative Reconstruction (SAFIRE). Using an image subtraction technique, quantum noise was measured in all images of each phantom. It was found that with FBP, the noise was independent of the background (textured vs. uniform). However, for SAFIRE, noise increased by up to 44% in the textured phantoms compared to the uniform phantom. As a result, the noise reduction from SAFIRE was found to be up to 66% in the uniform phantom but as low as 29% in the textured phantoms. Based on this result, it was clear that further investigation was needed to understand the impact that background texture has on image quality when iterative reconstruction algorithms are used.
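The image subtraction technique is only named here; its usual form subtracts two repeated scans of the same phantom so the (identical) background cancels and only quantum noise remains, with the standard deviation scaled by 1/sqrt(2). A minimal sketch under that assumption:

```python
import numpy as np

def subtraction_noise(img1, img2, mask=None):
    """Quantum noise (HU) from two repeated scans of the same phantom.

    Subtracting the repeats cancels the (identical) background texture;
    the difference image contains sqrt(2) times the single-image noise.
    """
    diff = img1.astype(float) - img2.astype(float)
    if mask is not None:
        diff = diff[mask]                 # restrict to an ROI if provided
    return diff.std() / np.sqrt(2)
```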

To further investigate this phenomenon with more realistic textures, two anthropomorphic textured phantoms were designed to mimic lung vasculature and fatty soft tissue texture. The phantoms (along with a corresponding uniform phantom) were fabricated with a multi-material 3D printer and imaged on the SOMATOM Flash scanner. Scans were repeated a total of 50 times in order to obtain ensemble statistics of the noise. A novel method of estimating the noise power spectrum (NPS) from irregularly shaped ROIs was developed. It was found that SAFIRE images had highly locally non-stationary noise patterns, with pixels near edges having higher noise than pixels in more uniform regions. Compared to FBP, SAFIRE images had 60% less noise on average in uniform regions; for edge pixels, noise was between 20% higher and 40% lower. The noise texture (i.e., NPS) was also highly dependent on the background texture for SAFIRE. Therefore, it was concluded that quantum noise properties in uniform phantoms are not representative of those in patients for iterative reconstruction algorithms, and texture should be considered when assessing the image quality of iterative algorithms.
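The irregular-ROI NPS estimator itself is not described in the abstract; for context, the conventional square-ROI, ensemble-averaged NPS that such a method generalizes can be sketched as follows (array shapes and names are assumptions):

```python
import numpy as np

def nps_2d(rois, pixel_size_mm):
    """2D noise power spectrum from an ensemble of square noise-only ROIs.

    rois          : (n_roi, N, N) array of noise-only ROIs
    pixel_size_mm : pixel spacing in mm

    NPS(u, v) = (dx*dy / (Nx*Ny)) * <|DFT{roi - mean(roi)}|^2>
    """
    n_roi, N, _ = rois.shape
    acc = np.zeros((N, N))
    for roi in rois:
        roi = roi - roi.mean()                            # remove mean level
        acc += np.abs(np.fft.fftshift(np.fft.fft2(roi))) ** 2
    nps = (pixel_size_mm ** 2 / (N * N)) * acc / n_roi    # ensemble average
    freqs = np.fft.fftshift(np.fft.fftfreq(N, d=pixel_size_mm))
    return freqs, nps
```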

To move beyond assessing only noise properties in textured phantoms towards assessing detectability, a series of new phantoms was designed specifically to measure low-contrast detectability in the presence of background texture. The textures used were optimized to match the texture in the liver regions of actual patient CT images using a genetic algorithm. The so-called “Clustered Lumpy Background” texture synthesis framework was used to generate the modeled texture. Three textured phantoms and a corresponding uniform phantom were fabricated with a multi-material 3D printer and imaged on the SOMATOM Flash scanner. Images were reconstructed with FBP and SAFIRE and analyzed using a multi-slice channelized Hotelling observer to measure detectability and the dose reduction potential of SAFIRE based on the uniform and textured phantoms. It was found that at the same dose, the improvement in detectability from SAFIRE (compared to FBP) was higher when measured in a uniform phantom than in the textured phantoms.
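The clustered lumpy background framework is only named here; a much simplified, non-clustered lumpy background, which the clustered variant extends by grouping lumps into clusters and fitting lump parameters to patient texture, can be sketched as:

```python
import numpy as np

rng = np.random.default_rng(0)

def lumpy_background(shape=(128, 128), n_lumps=200, amplitude=5.0, width=4.0):
    """Simple (non-clustered) lumpy background: a sum of Gaussian lumps at
    uniformly random positions. Parameter values are illustrative only."""
    yy, xx = np.indices(shape)
    img = np.zeros(shape)
    for _ in range(n_lumps):
        cy, cx = rng.uniform(0, shape[0]), rng.uniform(0, shape[1])
        img += amplitude * np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * width ** 2))
    return img

texture = lumpy_background()
```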

The final trajectory of this project aimed at developing methods to mathematically model lesions as a means to help assess image quality directly from patient images. The mathematical modeling framework is first presented. The models describe a lesion’s morphology in terms of size, shape, contrast, and edge profile as an analytical equation. The models can be voxelized and inserted into patient images to create so-called “hybrid” images. These hybrid images can then be used to assess detectability or estimability with the advantage that the ground truth of the lesion morphology and location is known exactly. Based on this framework, a series of liver lesions, lung nodules, and kidney stones were modeled based on images of real lesions. The lesion models were virtually inserted into patient images to create a database of hybrid images to go along with the original database of real lesion images. ROI images from each database were assessed by radiologists in a blinded fashion to determine the realism of the hybrid images. It was found that the radiologists could not readily distinguish between real and virtual lesion images (area under the ROC curve was 0.55). This study provided evidence that the proposed mathematical lesion modeling framework could produce reasonably realistic lesion images.
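The analytical lesion models themselves are not given in the abstract; as an illustration of the general idea, a radially symmetric lesion with a chosen size, contrast, and sigmoid edge profile can be voxelized and added to a patient ROI to form a hybrid image (image-domain insertion is shown here for simplicity; a later study in the dissertation inserts lesions into the raw projection data). All names and parameter values below are hypothetical.

```python
import numpy as np

def lesion_model(shape, center, radius_mm, contrast_hu, edge_mm, pixel_mm):
    """Voxelize a radially symmetric lesion with a sigmoid edge profile."""
    yy, xx = np.indices(shape)
    r = np.hypot(yy - center[0], xx - center[1]) * pixel_mm   # radius in mm
    # Roughly `contrast_hu` inside the radius, rolling off over ~edge_mm.
    return contrast_hu / (1.0 + np.exp((r - radius_mm) / edge_mm))

def make_hybrid(patient_roi, lesion):
    """Insert the modeled lesion into a real patient ROI (image domain)."""
    return patient_roi + lesion

# Hypothetical usage: a 10 mm, -15 HU lesion on a 128x128 ROI with 0.7 mm pixels.
roi = np.zeros((128, 128))                     # stand-in for a patient ROI
les = lesion_model(roi.shape, (64, 64), radius_mm=5, contrast_hu=-15,
                   edge_mm=1.0, pixel_mm=0.7)
hybrid = make_hybrid(roi, les)
```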

Based on that result, two studies were conducted to demonstrate the utility of the lesion models. The first study used the modeling framework as a measurement tool to determine how dose and reconstruction algorithm affect the quantitative analysis of liver lesions, lung nodules, and renal stones in terms of their size, shape, attenuation, edge profile, and texture features. The same database of real lesion images used in the previous study was used for this study. That database contained images of the same patients at two dose levels (50% and 100%), reconstructed with three algorithms on a GE 750HD CT system (GE Healthcare): FBP, Adaptive Statistical Iterative Reconstruction (ASiR), and Model-Based Iterative Reconstruction (MBIR). A total of 23 quantitative features were extracted from the lesions under each condition. It was found that both dose and reconstruction algorithm had a statistically significant effect on the feature measurements. In particular, radiation dose affected five, three, and four of the 23 features (related to lesion size, conspicuity, and pixel-value distribution) for liver lesions, lung nodules, and renal stones, respectively. MBIR significantly affected 9, 11, and 15 of the 23 features (including size, attenuation, and texture features) for liver lesions, lung nodules, and renal stones, respectively. Lesion texture was not significantly affected by radiation dose.

The second study demonstrating the utility of the lesion modeling framework focused on assessing the detectability of very low-contrast liver lesions in abdominal imaging. Specifically, detectability was assessed as a function of dose and reconstruction algorithm. As part of a parallel clinical trial, images from 21 patients were collected at 6 dose levels per patient on a SOMATOM Flash scanner. Subtle liver lesion models (contrast = -15 HU) were inserted into the raw projection data from the patient scans. The projections were then reconstructed with FBP and SAFIRE (strength 5). Lesion-free images were also reconstructed. Noise, contrast, CNR, and the detectability index of an observer model (non-prewhitening matched filter) were assessed. It was found that, compared to FBP, SAFIRE reduced noise by 52%, reduced contrast by 12%, increased CNR by 87%, and increased the detectability index by 65%. Further, a 2AFC human perception experiment was performed to assess the dose reduction potential of SAFIRE, which was found to be 22% compared to the standard-of-care dose.
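A standard link between a 2AFC experiment and the detectability index, often used when relating observer-model d' to human percent correct, is PC = Φ(d'/√2). A small sketch of that relation (not necessarily the exact analysis used in the study):

```python
import numpy as np
from scipy.stats import norm

def pc_from_dprime(dprime):
    """Expected 2AFC percent correct for a given detectability index."""
    return norm.cdf(dprime / np.sqrt(2))

def dprime_from_pc(pc):
    """Invert the 2AFC relation: d' = sqrt(2) * Phi^-1(PC)."""
    return np.sqrt(2) * norm.ppf(pc)

print(pc_from_dprime(1.5))    # ~0.86
print(dprime_from_pc(0.86))   # ~1.5
```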

In conclusion, this dissertation provides to the scientific community a series of new methodologies, phantoms, analysis techniques, and modeling tools that can be used to rigorously assess image quality from modern CT systems. Specifically, methods to properly evaluate iterative reconstruction have been developed and are expected to aid in the safe clinical implementation of dose reduction technologies.

Relevance:

30.00%

Publisher:

Abstract:

There are many sociopolitical theories to help explain why governments and actors do what they do. Securitization Theory is a process-oriented theory in international relations that focuses on how an actor defines another actor as an “existential threat,” and the resulting responses that can be taken in order to address that threat. While Securitization Theory is an acceptable method to analyze the relationships between actors in the international system, this thesis contends that the proper examination is multi-factorial, focusing on the addition of Role Theory to the analysis. Consideration of Role Theory, another international relations theory, which explains how an actor’s strategies, relationships, and perception by others are based on pre-conceptualized definitions of that actor’s identity, is essential in order to fully explain why an actor might respond to another in a particular way. Certain roles an actor may enact produce a rival relationship with other actors in the system, and it is those rival roles that elicit securitized responses. The possibility of a securitized response lessens when a role or a relationship between roles becomes ambiguous. There are clear points of role rivalry and role ambiguity between Hizb’allah and Iran, which has directly impacted, and continues to impact, how the United States (US) responds to these actors. Because of role ambiguity, the US has still not conceptualized an effective way to deal with Hizb’allah and Iran holistically across all their various areas of operation and in their various enacted roles. It would be overly simplistic to see Hizb’allah and Iran solely through one lens depending on which hemisphere or continent one is observing. The reality is likely more nuanced. Both Role Theory and Securitization Theory can help to understand and articulate those nuances. By examining two case studies of Hizb’allah and Iran’s enactment of various roles in both the Middle East and Latin America, the situations where roles cause a securitized response, and where the response is less securitized due to role ambiguity, will become clear. Using this augmented approach of combining both theories, along with supplementing the manner in which an actor, action, or role is analyzed, will produce better methods for policy-making that will be able to address the more ambiguous activities of Hizb’allah and Iran in these two regions.

Relevance:

30.00%

Publisher:

Abstract:

This paper presents an economic model of the effects of identity and social norms on consumption patterns. By incorporating qualitative studies in psychology and sociology, I propose a utility function that features two components – economic (functional) and identity elements. This setup is extended to analyze a market comprising a continuum of consumers, whose identity distribution along a spectrum of binary identities is described by a Beta distribution. I also introduce the notion of salience in the context of identity and consumption decisions. The key result of the model suggests that fundamental economic parameters, such as price elasticity and market demand, can be altered by identity elements. In addition, it predicts that firms in perfectly competitive markets may associate their products with certain types of identities, in order to reduce product substitutability and attain price-setting power.
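The paper's exact functional forms are not reproduced in this abstract; a toy sketch of the idea, with utility combining a functional (economic) term and an identity-fit term weighted by salience, and market demand aggregated over a Beta distribution of identities, might look like the following (all functional forms and parameter values are illustrative assumptions):

```python
import numpy as np
from scipy.stats import beta
from scipy.integrate import quad

def utility(q, price, identity, product_identity, salience, a=1.0):
    """Functional value minus expenditure, plus an identity-fit penalty."""
    economic = a * np.log(1 + q) - price * q
    identity_fit = -salience * (identity - product_identity) ** 2 * q
    return economic + identity_fit

def individual_demand(price, identity, product_identity, salience, a=1.0):
    """Demand from the first-order condition a/(1+q) = p + salience*(i - i_p)^2."""
    q = a / (price + salience * (identity - product_identity) ** 2) - 1.0
    return max(q, 0.0)

def market_demand(price, product_identity, salience, alpha=2.0, beta_=2.0):
    """Aggregate demand over a Beta(alpha, beta_) distribution of identities."""
    integrand = lambda i: (individual_demand(price, i, product_identity, salience)
                           * beta.pdf(i, alpha, beta_))
    d, _ = quad(integrand, 0.0, 1.0)
    return d

print(market_demand(price=0.3, product_identity=0.8, salience=0.5))
```

In this toy version, raising the salience parameter makes demand more sensitive to the distance between a consumer's identity and the identity associated with the product, which is one way to read the abstract's claim that identity elements can alter price elasticity and market demand.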

Relevance:

30.00%

Publisher:

Abstract:

This symposium describes a multi-dimensional strategy to examine fidelity of implementation in an authentic school district context. An existing large-district peer mentoring program provides an example. The presentation will address development of a logic model to articulate a theory of change; collaborative creation of a data set aligned with essential concepts and research questions; identification of independent, dependent, and covariate variables; issues related to use of big data that include conditioning and transformation of data prior to analysis; operationalization of a strategy to capture fidelity of implementation data from all stakeholders; and ways in which fidelity indicators might be used.

Relevance:

30.00%

Publisher:

Abstract:

Complex network theory is a framework increasingly used in the study of air transport networks, thanks to its ability to describe the structures created by networks of flights and their influence on dynamical processes such as delay propagation. While many works consider only a fraction of the network, created by major airports or airlines, for example, it is not clear if and how such a sampling process biases the observed structures and processes. In this contribution, we tackle this problem by studying how some observed topological metrics depend on the way the network is reconstructed, i.e., on the rules used to sample nodes and connections. Both structural and simple dynamical properties are considered, for eight major air networks and different source datasets. Results indicate that using a subset of airports strongly distorts our perception of the network, even when only small ones are discarded; at the same time, considering a subset of airlines yields a better and more stable representation. This allows us to provide some general guidelines on the way airports and connections should be sampled.
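The paper's exact sampling rules and metrics are not listed in the abstract; a minimal sketch of the kind of comparison described, computing a few topological metrics on a full network and on a node-sampled version that keeps only the largest airports, might look like this (the synthetic graph stands in for a real air transport network):

```python
import networkx as nx

def topological_summary(g):
    """A few of the metrics typically compared across sampled networks."""
    return {
        "nodes": g.number_of_nodes(),
        "edges": g.number_of_edges(),
        "avg_clustering": nx.average_clustering(g),
        "assortativity": nx.degree_assortativity_coefficient(g),
    }

def sample_top_airports(g, fraction=0.5):
    """Keep only the top `fraction` of airports by degree (node sampling)."""
    ranked = sorted(g.degree, key=lambda kv: kv[1], reverse=True)
    keep = [n for n, _ in ranked[: int(len(ranked) * fraction)]]
    return g.subgraph(keep).copy()

# Hypothetical stand-in for an air transport network (scale-free topology).
g_full = nx.barabasi_albert_graph(500, 3, seed=1)
print("full   :", topological_summary(g_full))
print("sampled:", topological_summary(sample_top_airports(g_full, 0.5)))
```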

Relevance:

30.00%

Publisher:

Abstract:

Queueing Theory is the mathematical study of queues or waiting lines. Queues abound in everyday life: in computer networks, in traffic islands, in the communication of electromagnetic signals, in telephone exchanges, at bank counters, at supermarket checkouts, in doctors' clinics, at petrol pumps, in offices where paperwork has to be processed, and in many other places. Originating with the published work of A. K. Erlang in 1909 [16] on congestion in telephone traffic, Queueing Theory has grown tremendously in a century. Its wide range of applications includes Operations Research, Computer Science, Telecommunications, Traffic Engineering, Reliability Theory, etc.
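As a concrete instance of the kind of system Queueing Theory describes, the single-server M/M/1 queue (Poisson arrivals at rate λ, exponential service at rate μ) has simple closed-form performance measures; a small sketch:

```python
def mm1_metrics(lam, mu):
    """Steady-state measures of an M/M/1 queue (requires lam < mu)."""
    if lam >= mu:
        raise ValueError("Queue is unstable: arrival rate must be below service rate")
    rho = lam / mu                  # server utilization
    L = rho / (1 - rho)             # mean number in system
    Lq = rho ** 2 / (1 - rho)       # mean number waiting in queue
    W = 1 / (mu - lam)              # mean time in system (Little's law: L = lam * W)
    Wq = rho / (mu - lam)           # mean waiting time in queue
    return {"rho": rho, "L": L, "Lq": Lq, "W": W, "Wq": Wq}

# Example: a bank counter with 9 customers/hour arriving and 12 served/hour.
print(mm1_metrics(lam=9, mu=12))
```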

Relevance:

30.00%

Publisher:

Abstract:

The complexity of issues surrounding continence management has been investigated by a UK multi-disciplinary research team working under the project title Tackling Ageing Continence through Theory Tools and Technology (TACT3). The team, comprising engineers, chemists, health researchers, designers and social anthropologists, is funded by the New Dynamics of Ageing Programme, ‘a seven year multidisciplinary research initiative with the ultimate aim of improving quality of life of older people. The programme is a unique collaboration between five UK Research Councils, and is the largest and most ambitious research programme on ageing ever mounted in the UK’ (www.newdynamics.group.shef.ac.uk). The TACT3 project comprises four work packages that are individually managed by members of the research team. One work package focuses solely on knowledge transfer of the research outputs and the management of the overall project. Another work package, entitled ‘Challenging Environmental Barriers’, has focused on the barriers in the built environment that prevent older people with continence concerns from participating in wider social life, namely access to publicly available toilet facilities. We also have a work package entitled ‘Improving Continence Interventions and Services’, which is exploring patient, carer and service provider experiences of receiving and delivering National Health Service (NHS) continence management treatments. The fourth work package, ‘Developing Assistive Technologies’, has worked with users to develop devices that promote confidence, improve health and therefore may facilitate greater social interaction for older people with continence management concerns.

Relevance:

30.00%

Publisher:

Abstract:

Thesis (Ph.D.)--University of Washington, 2016-06

Relevance:

30.00%

Publisher:

Abstract:

Since Bowlby devised his theory of attachment, originally for clinical purposes, refinements and extensions have developed its clinical utility. The research question asked how experienced contemporary clinicians now perceive the role of attachment in the formulation and treatment of distress by reference to their clinical work. Using grounded theory methodology, underpinned by a relativist, moderate social constructionist epistemology, initial sampling consisted of 16 in-depth interviews with experienced clinicians. The tentative theoretical categories that emerged were then developed in theoretical sampling in further interviews with 5 of the initial interviewees. The final theoretical categories to emerge concerned the prevalence of caregiver-related problems, the provision of safety together with the prioritisation of the relationship with self as attachment-related treatment strategies, and attachment theory’s provision of understanding in problem formulation. Whilst this suggests that attachment-related ideas are integrated in contemporary practice, it also suggests that the clinical utility now offered by attachment theory, as established in the literature, has not found broad appeal amongst clinicians despite the commonness of attachment-related presenting problems. The implications of this are manifold. To begin with, attachment theorists have largely failed to bring the potential now offered by attachment-related therapeutic interventions to the market. This situation makes it incumbent on the next generation of attachment researchers to more clearly articulate techniques with which clinicians, of whatever theoretical orientation, can better leverage attachment-related knowledge in their clinical work. In this enterprise, perhaps the knowledge and experience of expert clinicians could be harvested, as this research has done. Moreover, researchers must expand the evidence base that such interventions actually work. Beyond the implications for clinical utility and efficacy, the findings strengthen counselling psychology’s influence on society’s perception and treatment of attachment-related problems.

Relevance:

30.00%

Publisher:

Abstract:

Faced with the increasing spatial resolution of satellite optical sensors, new strategies must be developed to classify remote sensing images. Indeed, the abundance of detail in these images greatly reduces the effectiveness of spectral classifications; many textural classification methods, notably statistical approaches, are no longer suitable. Conversely, structural approaches offer an interesting opening: these object-oriented approaches consist of studying the structure of the image in order to interpret its meaning. An algorithm of this type is proposed in the first part of this thesis. Based on the detection and analysis of key points (KPC: KeyPoint-based Classification), it offers an effective solution to the problem of classifying images of very high spatial resolution. The classifications performed on the data show in particular its ability to differentiate visually similar textures. Moreover, it has been shown in the literature that evidential fusion, based on Dempster-Shafer theory, is well suited to remote sensing images because of its ability to integrate concepts such as ambiguity and uncertainty. However, few studies have been conducted on applying this theory to complex textural data such as that produced by structural classifications. The second part of this thesis aims to fill this gap by examining the fusion of multi-scale KPC classifications using Dempster-Shafer theory. The tests carried out show that this multi-scale approach improves the final classification in cases where the initial image is of low quality. In addition, the study highlights the potential improvement brought by estimating the reliability of the intermediate classifications, and provides directions for carrying out these estimations.
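The core operation behind the evidential fusion used in the second part of the thesis, Dempster's rule of combination for two mass functions over subsets of a frame of discernment, can be sketched as follows; the class labels and mass values here are hypothetical.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions (dicts mapping frozensets of labels to masses)."""
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb                    # mass falling on the empty set
    if conflict >= 1.0:
        raise ValueError("Sources are in total conflict")
    # Normalize by 1 - K, where K is the total conflicting mass.
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

# Hypothetical masses from two classifications of the same pixel at different scales.
water, urban, forest = frozenset({"water"}), frozenset({"urban"}), frozenset({"forest"})
m_coarse = {water: 0.6, urban | forest: 0.3, water | urban | forest: 0.1}
m_fine   = {water: 0.5, forest: 0.3, water | urban | forest: 0.2}
print(dempster_combine(m_coarse, m_fine))
```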

Relevance:

30.00%

Publisher:

Abstract:

Thesis (Ph.D.)--University of Washington, 2016-08

Relevance:

30.00%

Publisher:

Abstract:

In recent years, silicon photonics has advanced rapidly. Modulators based on this technology exhibit characteristics that are potentially attractive for short-reach communication systems. Indeed, these modulators are expected to operate at high transmission rates while limiting fabrication cost and power consumption. In parallel, multi-level pulse amplitude modulation (PAM) is promising for this type of system. This work therefore focuses on the development of silicon modulators for the transmission of PAM signals. In the first chapter, the theoretical concepts required for the design of silicon modulators are presented. Mach-Zehnder modulators and Bragg-grating-based modulators are the main structures discussed. In addition, electro-optic effects in silicon, PAM modulation, the different types of integrated electrodes, and distortion compensation through signal processing are detailed. In the second chapter, a Mach-Zehnder modulator with segmented electrodes is presented. Segmenting the electrodes enables the generation of PAM optical signals directly from binary sequences. This approach eliminates the need for a digital-to-analog converter by integrating that function in the optical domain, with the aim of reducing the cost of the communication system. This chapter contains a detailed description of the modulator, the optical and electrical characterization results, and the system tests. The system tests also include the use of pre-compensation or post-compensation of the signal, in the form of frequency-response equalization, for the PAM-4 and PAM-8 modulation formats at different bit rates. A transmission rate of 30 Gb/s is demonstrated in both cases, despite a significant frequency-response limitation introduced by the packaging of the radio-frequency circuits (3 dB bandwidth of 8 GHz). This is the first demonstration of PAM-8 modulation using a Mach-Zehnder modulator with segmented electrodes. Finally, the conclusions drawn from this work led to the design of a second segmented-electrode Mach-Zehnder modulator, currently under test, whose performance shows very high potential. In the third chapter, a Bragg-grating modulator with two phase shifts is presented. The use of Bragg gratings is still a relatively undeveloped approach to modulation. The spectral response of these structures can be controlled precisely, an attractive characteristic for modulator design. In this work, we propose adding two phase shifts to a uniform Bragg grating in order to obtain a transmission peak within its reflection band. The amplitude of this transmission peak can then be altered by means of a pn junction. As in the second chapter, this chapter includes a detailed description of the modulator, the optical and electrical characterization results, and the system tests. In addition, the characterization of pn junctions using the Bragg-grating modulator is explained. PAM-4 transmission at 60 Gb/s and OOK transmission at 55 Gb/s are demonstrated after compensation of the signal distortions. To our knowledge, this is the fastest Bragg-grating modulator reported to date. Moreover, for the first time, the performance of such a modulator approaches that of the fastest silicon modulators based on ring microresonators or Mach-Zehnder interferometers.
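The segmented-electrode approach described in the second chapter effectively performs digital-to-analog conversion in the optical domain: binary-weighted electrode segments (for example, a 2:1 length ratio) map two bit streams onto four phase levels and hence four output power levels. The following is only a toy sketch of that mapping; it ignores the bias-point and segment-length tuning needed in practice to equalize the PAM-4 eye, and all parameter values are assumptions.

```python
import numpy as np

def segmented_mzm_levels(bits_msb, bits_lsb, vpi_fraction=0.25):
    """Toy optical DAC: two electrode segments with a 2:1 length ratio.

    Each segment contributes a phase shift proportional to its length when
    its binary drive is high, so the total phase encodes 2*b_msb + b_lsb.
    The ideal MZM power transfer (sin^2) is then applied; real devices tune
    the bias point and segment lengths to space the four levels evenly.
    """
    phase = (2 * bits_msb + bits_lsb) * vpi_fraction * np.pi   # binary-weighted phase
    return np.sin(phase / 2) ** 2                              # normalized output power

bits = np.random.randint(0, 2, size=(2, 8))                    # MSB and LSB streams
print(segmented_mzm_levels(bits[0], bits[1]))
```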

Relevance:

30.00%

Publisher:

Abstract:

This paper introduces systems of exchange values as tools for the organization of multi-agent systems. Systems of exchange values are defined on the basis of the theory of social exchanges developed by Piaget and Homans. A model of social organization is proposed, where social relations are construed as social exchanges and exchange values are put into use in support of the continuity of the performance of social exchanges. The dynamics of social organizations is formulated in terms of the regulation of exchanges of values, so that social equilibrium is connected to the continuity of the interactions. The concept of a supervisor of social equilibrium is introduced as a centralized mechanism for solving the problem of the equilibrium of the organization. The equilibrium supervisor solves this problem by making use of a qualitative Markov Decision Process that uses numerical intervals for the representation of exchange values.
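The qualitative MDP itself is beyond an abstract-level sketch, but the interval bookkeeping at its core, accumulating interval-valued credits and debits for an agent and checking whether the resulting balance interval lies within a tolerance band around zero (one plausible reading of equilibrium), might look like the following illustration. The names and the tolerance rule are assumptions, not the paper's definitions.

```python
from dataclasses import dataclass

@dataclass
class Interval:
    lo: float
    hi: float
    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)
    def __neg__(self):
        return Interval(-self.hi, -self.lo)

def balance(credits, debits):
    """Net exchange balance of an agent as an interval (credits minus debits)."""
    total = Interval(0.0, 0.0)
    for c in credits:
        total = total + c
    for d in debits:
        total = total + (-d)
    return total

def in_equilibrium(bal, tolerance=1.0):
    """Treat the agent as equilibrated if the balance stays within the tolerance band."""
    return -tolerance <= bal.lo and bal.hi <= tolerance

# Hypothetical exchange values accumulated by one agent over several interactions.
b = balance(credits=[Interval(2, 3), Interval(1, 2)], debits=[Interval(2.5, 4)])
print(b, in_equilibrium(b))
```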