890 results for Tchebyshev metrics
Abstract:
This research studied project performance measurement from the perspective of strategic management. The objective was to find a generic model for project performance measurement that emphasizes strategy and decision making. The research followed the guidelines of a constructive research methodology. As a result, the study suggests a model that measures projects with multiple metrics both during and after the project. Measurement after the project is suggested to be linked to the strategic performance measures of the company. The measurement should be conducted with centralized project portfolio management, e.g., using the project management office in the organization. After the project, the metrics measure the project's actual benefit realization. During the project, the metrics are universal and measure the relation of accomplished objectives to costs, schedule, and internal resource usage. The outcomes of these measures should be forecasted using qualitative or stochastic methods. A solid theoretical background for the model was found in the literature covering performance measurement, projects, and uncertainty. The study states that the model can be implemented in companies, a statement supported by empirical evidence from a single case study. Gathering empirical evidence about the actual usefulness of the model in companies is left to future evaluative research.
Abstract:
The vast majority of our contemporary society owns a mobile phone, which has resulted in a dramatic rise in the number of networked computers in recent years. Security issues in these computers have followed the same trend, and nearly everyone is now affected by them. How could the situation be improved? For software engineers, an obvious answer is to build computer software with security in mind. A problem with building secure software is how to define secure software, or how to measure security. This thesis divides the problem into three research questions. First, how can we measure the security of software? Second, what types of tools are available for measuring security? And finally, what do these tools reveal about the security of software? Measuring tools of this kind are commonly called metrics. This thesis focuses on the perspective of software engineers in the software design phase. The focus on the design phase means that code-level semantics and programming language specifics are not discussed in this work. Organizational policy, management issues, and the software development process are also out of scope. The first two research problems were studied using a literature review, while the third was studied using case study research. The target of the case study was a Java-based email server called Apache James, whose changelog, security issue history, and source code were available. The research revealed that there is a consensus in the terminology on software security. Security verification activities are commonly divided into evaluation and assurance. The focus of this work was on assurance, which means verifying one's own work. There are 34 metrics available for security measurement, of which five are evaluation metrics and 29 are assurance metrics. We found, however, that the general quality of these metrics was not good. Only three metrics in the design category passed the inspection criteria and could be used in the case study. The metrics claim to give quantitative information on the security of the software, but in practice they were limited to comparing different versions of the same software. Apart from being relative, the metrics were unable to detect security issues or point out problems in the design. Furthermore, interpreting the metrics' results was difficult. In conclusion, the general state of software security metrics leaves a lot to be desired. The metrics studied had both theoretical and practical issues and are not suitable for daily engineering workflows. They nevertheless provide a basis for further research, since they point out areas where security metrics must improve if verification of security at the design level is desired.
Abstract:
The objective of the study is to extend the existing literature on hedging commodity price risks by investigating what kinds of hedging strategies can be used in companies that use bitumen as a raw material in their production. Five alternative swap hedging strategies in the bitumen markets are empirically tested: a full hedge strategy; simple, conservative, and aggressive term structure strategies; and an implied volatility strategy. The effectiveness of the alternative strategies is measured by excess returns relative to a no-hedge strategy. In addition, the downside risk of each strategy is measured with target absolute semi-deviation. The results indicate that none of the tested strategies outperforms the no-hedge strategy in terms of excess returns across all maturities. The best-performing aggressive term structure strategy succeeds in creating positive excess returns only at short maturities. However, risk seems to increase hand-in-hand with the excess returns, so that the best-performing strategies also receive the highest risk metrics. This implies that a company wishing to gain from favorable price movements must be ready to bear greater risk. Thus, no hedging strategy superior to the others is found.
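The two evaluation measures used in the study are straightforward to state. Below is a minimal sketch, assuming monthly return series and taking the first-lower-partial-moment definition of target absolute semi-deviation (the series and parameters are hypothetical):

```python
import numpy as np

def excess_return(strategy_returns, no_hedge_returns):
    """Mean excess return of a hedging strategy over the no-hedge baseline."""
    return float(np.mean(strategy_returns) - np.mean(no_hedge_returns))

def target_absolute_semideviation(returns, target=0.0):
    """Average absolute shortfall below a target return.

    Only observations below `target` contribute, so the measure captures
    downside risk only. (Some authors use a root-mean-square variant.)
    """
    shortfall = np.maximum(target - np.asarray(returns), 0.0)
    return float(shortfall.mean())

# Toy illustration with hypothetical monthly return series:
rng = np.random.default_rng(0)
hedged = rng.normal(0.002, 0.01, 60)    # e.g., full-hedge strategy
unhedged = rng.normal(0.001, 0.03, 60)  # no-hedge baseline
print(excess_return(hedged, unhedged))
print(target_absolute_semideviation(hedged))
```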
Abstract:
During the maintenance phase of a web service, one wants to ensure that changes made to the service do not cause error situations and that the service keeps working flawlessly. Acceptance testing of a change can be performed as regression testing by comparing the state of the service before and after the change. In a content-oriented web service, testing focuses on the semantic and visual correctness of the page presented to the end user, as well as on various functional tests. This thesis examines in particular the maintenance of web services built on the popular WordPress content management system. A central part of maintaining services built on such systems is updating the CMS and the plugins that complement it to current versions. These updates not only bring new features for the developers of the web service but also patch security vulnerabilities and fix bugs present in earlier versions. In this work, the target company's existing web service maintenance processes were developed further based on improvement needs identified in them. The renewed process is divided into two parts: monitoring the need for updates and performing the updates. For monitoring update needs, a new tool was developed to make it easier to grasp the overall picture. For performing updates, the work focused on developing automated regression testing, in which the most important testing technique is visual testing based on comparing screenshots captured from the web service. Monitoring targets were also defined for the new maintenance processes in order to evaluate the success of the reform and to guide further development.
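The screenshot-based visual testing described above boils down to comparing a baseline image captured before the update with one captured after it, and flagging the page when too many pixels differ. A minimal sketch of that comparison, assuming the Pillow imaging library and same-sized PNG captures (the file names and the 1% threshold are hypothetical):

```python
from PIL import Image, ImageChops

def screenshot_diff_ratio(baseline_path, updated_path):
    """Fraction of pixels that differ between two same-sized screenshots."""
    before = Image.open(baseline_path).convert("RGB")
    after = Image.open(updated_path).convert("RGB")
    diff = ImageChops.difference(before, after)
    # A pixel counts as changed if any of its RGB channels differs.
    changed = sum(1 for px in diff.getdata() if px != (0, 0, 0))
    return changed / (diff.width * diff.height)

# Flag the page for manual review if more than 1% of pixels changed.
if screenshot_diff_ratio("before_update.png", "after_update.png") > 0.01:
    print("Visual regression detected: review the page before deploying.")
```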
Abstract:
While a red-green-blue (RGB) image of the retina carries quite limited information, retinal multispectral images provide both spatial and spectral information, which could enhance the capability of exploring eye-related problems in their early stages. In this thesis, two learning-based algorithms for reconstructing spectral retinal images from RGB images are developed in a two-step manner. First, related previous techniques are reviewed and studied. Then, the most suitable methods are enhanced and combined to form new algorithms for the reconstruction of spectral retinal images. The proposed approaches are based on a radial basis function network that learns a mapping from tristimulus colour space to multispectral space. The resemblance between the reproduced spectral images and the originals is estimated using the spectral distance metrics spectral angle mapper, spectral correlation mapper, and spectral information divergence, which show promising results for the suggested algorithms.
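Of the three distance metrics named, the spectral angle mapper is the angle between two spectra viewed as vectors, which makes it insensitive to uniform intensity scaling, while spectral information divergence compares the spectra as probability distributions. A minimal NumPy sketch under the usual textbook definitions (not necessarily the thesis's exact implementation):

```python
import numpy as np

def spectral_angle_mapper(x, y):
    """Angle (radians) between two spectra; 0 means identical shape."""
    cos_angle = np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y))
    return np.arccos(np.clip(cos_angle, -1.0, 1.0))

def spectral_information_divergence(x, y, eps=1e-12):
    """Symmetric KL divergence between spectra normalized to probabilities."""
    p = x / (x.sum() + eps)
    q = y / (y.sum() + eps)
    kl_pq = np.sum(p * np.log((p + eps) / (q + eps)))
    kl_qp = np.sum(q * np.log((q + eps) / (p + eps)))
    return kl_pq + kl_qp

# Compare an original and a reconstructed spectrum (toy values):
original = np.array([0.12, 0.35, 0.50, 0.41, 0.22])
reconstructed = np.array([0.10, 0.33, 0.52, 0.40, 0.25])
print(spectral_angle_mapper(original, reconstructed))
print(spectral_information_divergence(original, reconstructed))
```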
Abstract:
This study examines the efficiency of search engine advertising strategies employed by firms. The research setting is the online retailing industry, which is characterized by extensive use of Web technologies and high competition for market share and profitability. For Internet retailers, search engines are increasingly serving as an information gateway for many decision-making tasks. In particular, search engine advertising (SEA) has opened a new marketing channel for retailers to attract new customers and improve their performance. In addition to natural (organic) search marketing strategies, search engine advertisers compete for top advertisement slots provided by search brokers such as Google and Yahoo! through keyword auctions. The rationale is that greater visibility on a search engine during a keyword search will capture customers' interest in a business and its product or service offerings. Search engines account for a large share of online activity today. Compared with the slow growth of traditional marketing channels, online search volumes continue to grow at a steady rate. According to the Search Engine Marketing Professional Organization, spending on search engine marketing by North American firms in 2008 was estimated at $13.5 billion. Despite the significant role SEA plays in Web retailing, scholarly research on the topic is limited. Prior studies in SEA have focused on search engine auction mechanism design. In contrast, research on the business value of SEA has been limited by the lack of empirical data on search advertising practices. Recent advances in search and retail technologies have created data-rich environments that enable new research opportunities at the interface of marketing and information technology. This research uses extensive data from Web retailing and Google-based search advertising and evaluates Web retailers' use of resources, search advertising techniques, and other relevant factors that contribute to business performance across different metrics. The methods used include Data Envelopment Analysis (DEA), data mining, and multivariate statistics. This research contributes to empirical research by analyzing several Web retail firms in different industry sectors and product categories. One of the key findings is that the dynamics of sponsored search advertising vary between multi-channel and Web-only retailers. While the key performance metrics for multi-channel retailers include measures such as online sales, conversion rate (CR), click-through rate (CTR), and impressions, the key performance metrics for Web-only retailers focus on organic and sponsored ad ranks. These results provide a useful contribution to our organizational-level understanding of search engine advertising strategies for both multi-channel and Web-only retailers. They also contribute to current knowledge of technology-driven marketing strategies and provide managers with a better understanding of sponsored search advertising and its impact on various performance metrics in Web retailing.
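The funnel metrics listed for multi-channel retailers chain together directly: impressions lead to clicks (CTR) and clicks lead to orders (CR). A minimal sketch of how these performance metrics are computed from raw campaign counts (the figures are hypothetical):

```python
def campaign_metrics(impressions, clicks, conversions, revenue):
    """Standard sponsored-search funnel metrics from raw campaign counts."""
    ctr = clicks / impressions if impressions else 0.0  # click-through rate
    cr = conversions / clicks if clicks else 0.0        # conversion rate
    revenue_per_impression = revenue / impressions if impressions else 0.0
    return {"CTR": ctr, "CR": cr, "rev_per_impr": revenue_per_impression}

# Example: 120,000 impressions, 3,600 clicks, 90 orders, $8,100 revenue.
print(campaign_metrics(120_000, 3_600, 90, 8_100.0))
# -> CTR = 3.0%, CR = 2.5%, revenue per impression = $0.0675
```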
Abstract:
Given the significant growth of the Internet in recent years, marketers have been striving for new techniques and strategies to prosper in the online world. Statistically, search engines have been the most dominant channels of Internet marketing in recent years. However, the mechanics of advertising in such a marketplace have created a challenging environment for marketers trying to position their ads among their competitors. This study uses a unique cross-sectional dataset of the top 500 Internet retailers in North America and hierarchical multiple regression analysis to empirically investigate the effect of keyword competition on the relationship between ad position and its determinants in the sponsored search market. To this end, the study draws on the literature in consumer search behavior, keyword auction mechanism design, and search advertising performance as its theoretical foundation. This study is the first of its kind to examine sponsored search market characteristics in a cross-sectional setting where the level of keyword competition is explicitly captured as the number of Internet retailers competing for similar keywords. Internet retailing provides an appropriate setting for this study given the high-stakes battle for market share and the intense competition for keywords in the sponsored search marketplace. The findings indicate that bid values and ad relevancy metrics, as well as their interaction, affect the position of ads on the search engine results pages (SERPs). These results confirm some of the findings from previous studies that examined sponsored search advertising performance at the keyword level. Furthermore, the study finds that the position of ads for web-only retailers depends on both bid values and ad relevancy metrics, whereas multi-channel retailers rely more on their bid values. This difference between web-only and multi-channel retailers is also observed in the moderating effect of keyword competition on the relationships between ad position and its key determinants. Specifically, the study finds that keyword competition has significant moderating effects only for multi-channel retailers.
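The moderation analysis described above, where keyword competition alters the strength of the relationship between ad position and its determinants, corresponds to interaction terms in a hierarchical regression. A minimal sketch with statsmodels on synthetic data (all column names and coefficients are hypothetical):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical cross-sectional data: one row per retailer.
rng = np.random.default_rng(42)
n = 200
df = pd.DataFrame({
    "bid": rng.uniform(0.5, 5.0, n),        # average bid value
    "relevancy": rng.uniform(0.0, 1.0, n),  # ad relevancy metric
    "competition": rng.poisson(8, n),       # retailers on similar keywords
})
df["ad_position"] = (6 - 0.8 * df["bid"] - 2.0 * df["relevancy"]
                     + 0.05 * df["bid"] * df["competition"]
                     + rng.normal(0, 0.5, n))

# Step 1: main effects only.
main = smf.ols("ad_position ~ bid + relevancy", data=df).fit()

# Step 2: add the moderator and its interactions with the determinants.
moderated = smf.ols(
    "ad_position ~ bid + relevancy + competition"
    " + bid:competition + relevancy:competition",
    data=df,
).fit()

# A significant interaction coefficient indicates that keyword competition
# moderates the effect of that determinant on ad position.
print(moderated.params)
```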
Abstract:
Octopamine (OA) and tyramine (TA) play important roles in homeostatic mechanisms, behavior, and modulation of neuromuscular junctions in arthropods. However, direct actions of these amines on muscle force production that are distinct from effects at the neuromuscular synapse have not been well studied. We utilize the technical benefits of the Drosophila larval preparation to distinguish the effects of OA and TA on the neuromuscular synapse from their effects on contractility of muscle cells. In contrast to the slight and often insignificant effects of TA, the action of OA was profound across all metrics assessed. We demonstrate that exogenous OA application decreases the input resistance of larval muscle fibers, increases the amplitude of excitatory junction potentials (EJPs), augments contraction force and duration, and at higher concentrations (10⁻⁵ and 10⁻⁴ M) affects muscle cells 12 and 13 more than muscle cells 6 and 7. Similarly, OA increases the force of synaptically driven contractions in a cell-specific manner. Moreover, such augmentation of contractile force persisted during direct muscle depolarization concurrent with synaptic block. OA elicited an even more profound effect on basal tonus. Application of 10⁻⁵ M OA increased synaptically driven contractions by ∼1.1 mN but gave rise to a 28-mN increase in basal tonus in the absence of synaptic activation. Augmentation of basal tonus exceeded any physiological stimulation paradigm and can potentially be explained by changes in intramuscular protein mechanics. Thus we provide evidence for independent but complementary effects of OA on chemical synapses and muscle contractility.
Abstract:
Self-regulation is considered a powerful predictor of behavioral and mental health outcomes during adolescence and emerging adulthood. In this dissertation I address some electrophysiological and genetic correlates of this important skill set in a series of four studies. Across all studies, event-related potentials (ERPs) were recorded as participants responded to tones presented in attended and unattended channels in an auditory selective attention task. In Study 1, examining these ERPs in relation to parental reports on the Behavior Rating Inventory of Executive Function (BRIEF) revealed that an early frontal positivity (EFP) elicited by to-be-ignored/unattended tones was larger in those with poorer self-regulation. As is traditionally found, N1 amplitudes were more negative for the to-be-attended rather than unattended tones. Additionally, N1 latencies to unattended tones correlated with parent ratings on the BRIEF, where shorter latencies predicted better self-regulation. In Study 2 I tested a model of the associations between self-regulation scores and allelic variations in monoamine neurotransmitter genes, and their concurrent links to ERP markers of attentional control. Allelic variations in dopamine-related genes predicted both my ERP markers and self-regulatory variables, and played a moderating role in the association between the two. In Study 3 I examined whether training in Integra Mindfulness Martial Arts, an intervention program which trains elements of self-regulation, would lead to improvement in ERP markers of attentional control and parent-report BRIEF scores in a group of adolescents with self-regulatory difficulties. I found that those in the treatment group amplified their processing of attended relative to unattended stimuli over time and reduced their levels of problematic behaviour, whereas those in the waitlist control group showed little to no change on both of these metrics. In Study 4 I examined potential associations between self-regulation and attentional control in a group of emerging adults. Both event-related spectral perturbations (ERSPs) and intertrial coherence (ITC) in the alpha and theta range predicted individual differences in self-regulation. Across the four studies I was able to conclude that real-world self-regulation is indeed associated with neural markers of attentional control. Targeted interventions focusing on attentional control may improve self-regulation in those experiencing difficulties in this regard.
Characterizing Dynamic Optimization Benchmarks for the Comparison of Multi-Modal Tracking Algorithms
Abstract:
Population-based metaheuristics, such as particle swarm optimization (PSO), have been employed to solve many real-world optimization problems. Although it is often sufficient to find a single solution to these problems, there do exist cases where identifying multiple, diverse solutions can be beneficial or even required. Some of these problems are further complicated by a change in their objective function over time. This type of optimization is referred to as dynamic, multi-modal optimization. Algorithms which exploit multiple optima in a search space are identified as niching algorithms. Although numerous dynamic niching algorithms have been developed, their performance is often measured solely on their ability to find a single, global optimum. Furthermore, the comparisons often use synthetic benchmarks whose landscape characteristics are generally limited and unknown. This thesis provides a landscape analysis of the dynamic benchmark functions commonly developed for multi-modal optimization. The benchmark analysis results reveal that the mechanisms responsible for dynamism in the current dynamic benchmarks do not significantly affect landscape features, suggesting a lack of representation for problems whose landscape features vary over time. This analysis is used in a comparison of current niching algorithms to identify the effects that specific landscape features have on niching performance. Two performance metrics are proposed to measure both the scalability and accuracy of the niching algorithms. The algorithm comparison results demonstrate which algorithms are best suited for a variety of dynamic environments. The comparison also examines each of the algorithms in terms of its niching behaviour and analyzes the range of, and trade-off between, scalability and accuracy when tuning each algorithm's parameters. These results contribute to the understanding of current niching techniques as well as the problem features that ultimately dictate their success.
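For readers unfamiliar with the problem class, a dynamic, multi-modal objective can be as simple as a handful of peaks whose locations and heights drift whenever the environment changes, in the spirit of moving-peaks benchmarks. A toy sketch (not one of the benchmark functions analyzed in the thesis):

```python
import numpy as np

class MovingPeaks:
    """Toy dynamic, multi-modal landscape: several peaks that drift over time."""

    def __init__(self, n_peaks=5, dim=2, seed=0):
        self.rng = np.random.default_rng(seed)
        self.centers = self.rng.uniform(-5, 5, size=(n_peaks, dim))
        self.heights = self.rng.uniform(30, 70, size=n_peaks)

    def __call__(self, x):
        """Fitness is the value of the tallest cone-shaped peak at x."""
        dists = np.linalg.norm(self.centers - x, axis=1)
        return np.max(self.heights - 5.0 * dists)

    def change_environment(self):
        """Shift peak positions and heights, so the optima move over time."""
        self.centers += self.rng.normal(0, 0.5, size=self.centers.shape)
        self.heights += self.rng.normal(0, 2.0, size=self.heights.shape)

f = MovingPeaks()
print(f(np.zeros(2)))   # fitness of a point before the change
f.change_environment()
print(f(np.zeros(2)))   # the same point after the landscape moved
```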
Abstract:
We assess the predictive ability of three VPIN metrics on the basis of two highly volatile market events in China, and examine the association between VPIN and toxicity-induced volatility through conditional probability analysis and multiple regression. We examine the dynamic relationship between VPIN and high-frequency liquidity using vector autoregression models, Granger causality tests, and impulse response analysis. Our results suggest that Bulk Volume VPIN has the best risk-warning effect among the major VPIN metrics. VPIN is positively associated with market volatility induced by toxic information flow. Most importantly, we document a positive feedback effect between VPIN and high-frequency liquidity, whereby a negative liquidity shock drives up VPIN, which in turn leads to a further liquidity drain. Our study provides empirical evidence of an intrinsic game between informed traders and market makers when facing toxic information in the high-frequency trading world.
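For context, the Bulk Volume VPIN singled out by the results classifies each equal-volume bucket's trading volume as buyer- or seller-initiated from the standardized price change, then averages the absolute order imbalance. A minimal sketch following the standard Easley, López de Prado, and O'Hara construction (the inputs and bucket size are hypothetical):

```python
import numpy as np
from scipy.stats import norm

def bulk_volume_vpin(price_changes, bucket_volume):
    """Bulk Volume VPIN over equal-volume buckets.

    price_changes: array of price changes, one per volume bucket.
    bucket_volume: the (constant) volume V in each bucket.
    """
    sigma = np.std(price_changes)
    # Bulk volume classification: the fraction of each bucket's volume
    # classified as buys is the normal CDF of the standardized price change.
    buy_fraction = norm.cdf(price_changes / sigma)
    buy_volume = bucket_volume * buy_fraction
    sell_volume = bucket_volume - buy_volume
    # VPIN: average absolute order imbalance, normalized by bucket volume.
    return np.mean(np.abs(buy_volume - sell_volume)) / bucket_volume

# Toy example: 50 buckets of 10,000 shares each.
rng = np.random.default_rng(1)
print(bulk_volume_vpin(rng.normal(0, 0.02, 50), 10_000))
```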
Abstract:
Recent discoveries show the important role that ribonucleic acid (RNA) plays within cells, whether in controlling gene expression or regulating several homeostatic processes, in addition to the transcription and translation of deoxyribonucleic acid (DNA) into protein. If we want to understand how the cell works, we must first understand its components and how they interact, and in particular RNA. A molecule's function depends on its three-dimensional (3D) structure, yet determining the 3D structure of an RNA experimentally is very costly. Current computational methods for predicting RNA structure account only for classical, or canonical, base pairings, similar to those of DNA's famous double-helix structure. Here, we improved RNA structure prediction by taking into account all possible types of base pairings, including so-called non-canonical ones. This was made possible within a new paradigm for RNA folding based on nucleotide cyclic motifs: building blocks for constructing RNA. In addition, we developed new metrics to quantify the accuracy of RNA 3D structure prediction methods, given the recent introduction of several such methods. Finally, we evaluated the predictive power of new low-resolution RNA structure probing techniques.
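For context on what such accuracy metrics quantify: the classical baseline that newer RNA-specific metrics complement is the root-mean-square deviation (RMSD) between predicted and reference atomic coordinates after superposition. A minimal sketch of plain RMSD on already-aligned coordinates (this is the standard measure, not one of the thesis's new metrics):

```python
import numpy as np

def rmsd(predicted, reference):
    """Root-mean-square deviation between two aligned N x 3 coordinate sets."""
    assert predicted.shape == reference.shape
    return np.sqrt(np.mean(np.sum((predicted - reference) ** 2, axis=1)))

# Toy example: a 4-atom fragment and a uniformly perturbed prediction.
reference = np.array([[0.0, 0.0, 0.0], [1.5, 0.0, 0.0],
                      [1.5, 1.5, 0.0], [0.0, 1.5, 1.5]])
predicted = reference + 0.1
print(rmsd(predicted, reference))  # ~0.17 for a 0.1 shift on each axis
```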
Abstract:
The research project concerns the design and planning problems of a long-haul optical network, also called a core network (OWAN, Optical Wide Area Network). This is a network that carries aggregated flows in circuit-switched mode. An OWAN connects different sites using optical fibers linked by optical and/or electrical switches/routers. An OWAN is meshed at the scale of a country or a continent and allows very high-speed data transit. In the first part of the thesis project, we focus on the problem of designing agile optical networks. The agility problem is motivated by growing bandwidth demand and by the dynamic nature of traffic. The equipment deployed by network operators must offer more powerful and more flexible configuration tools to best manage the complexity of connections between clients and to account for the evolving nature of traffic. Often, network design consists of forecasting the bandwidth needed to carry a given traffic load. Here, we additionally seek to choose the best nodal configuration, with a level of agility capable of guaranteeing an optimal assignment of network resources. We also study two other types of problems that a network operator faces. The first is network resource assignment. Once the network architecture in terms of equipment has been chosen, the remaining question is how to dimension and optimize this architecture so that it achieves the best possible level of agility to satisfy all demand. Defining the routing topology is a complex optimization problem. It consists of defining a set of logical optical paths, choosing the physical routes they follow and the wavelengths they use, so as to optimize the quality of the solution with respect to a set of metrics measuring network performance. In addition, we must define the best network dimensioning strategy so that it is adapted to the dynamic nature of traffic. The second problem is optimizing the capital expenditure (CAPEX) and operating expenditure (OPEX) of the proposed transport architecture. For the type of dimensioning architecture considered in this thesis, CAPEX includes the costs of routing, installing, and commissioning all the network equipment installed at the connection endpoints and in the intermediate nodes. OPEX corresponds to all costs related to operating the transport network. Given the symmetric nature and the exponential number of variables in most mathematical formulations developed for these types of problems, we particularly explored solution approaches based on column generation and greedy algorithms, which are well suited to solving large optimization problems.
A comparative study of several resource allocation strategies and solution algorithms, on different data sets and OWAN transport networks, shows that the best network cost is obtained in two cases: an anticipative dimensioning strategy combined with a column generation solution method, in the cases where we allow or forbid disturbing already established connections. Also, a good distribution of network resource usage is observed in the scenarios using a myopic dimensioning strategy combined with a resource allocation approach solved with column generation techniques. The results of this work also showed that considerable savings are possible in capital and operating costs. Indeed, an intelligent, heterogeneous distribution of network resources across the nodes allows a substantial reduction in network costs compared with a classical resource allocation solution that adopts a homogeneous architecture using the same nodal configuration in all nodes. We showed that it is possible to reduce the number of photonic switches while satisfying the traffic demand and keeping the overall network resource allocation cost unchanged relative to the classical architecture. This implies a substantial reduction in CAPEX and OPEX. In our computational experiments, the results show that the cost reduction can reach up to 65% on some data sets and networks.
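Column generation, the main solution approach mentioned above, alternates between a restricted master problem over the columns generated so far and a pricing problem that searches for a new column with negative reduced cost. A generic skeleton of that loop on the classic cutting-stock problem, using scipy (an illustrative stand-in, not the thesis's OWAN formulation):

```python
import numpy as np
from scipy.optimize import linprog

# Cutting stock: rolls of width 100; item sizes and demands (toy data).
W, sizes, demand = 100, np.array([45, 36, 31, 14]), np.array([97, 610, 395, 211])

def solve_master(columns):
    """Restricted master LP: minimize rolls used s.t. demand is covered."""
    A = np.column_stack(columns)
    res = linprog(c=np.ones(A.shape[1]), A_ub=-A, b_ub=-demand,
                  bounds=(0, None), method="highs")
    duals = -res.ineqlin.marginals  # duals of the >= demand constraints
    return res, duals

def price_out(duals):
    """Pricing: unbounded knapsack maximizing dual value within width W."""
    best = np.zeros(W + 1)                 # best[w] = max dual value at width w
    choice = np.full(W + 1, -1, dtype=int)
    for w in range(1, W + 1):
        for i, s in enumerate(sizes):
            if s <= w and best[w - s] + duals[i] > best[w]:
                best[w], choice[w] = best[w - s] + duals[i], i
    column, w = np.zeros(len(sizes)), W    # recover the pattern by backtracking
    while choice[w] >= 0:
        column[choice[w]] += 1
        w -= sizes[choice[w]]
    return best[W], column

# Start with trivial single-item patterns, then generate improving columns.
columns = [np.eye(len(sizes))[i] * (W // s) for i, s in enumerate(sizes)]
while True:
    res, duals = solve_master(columns)
    value, column = price_out(duals)
    if value <= 1 + 1e-9:                  # no column with negative reduced cost
        break
    columns.append(column)
print("LP lower bound on rolls:", res.fun)
```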
Abstract:
This dissertation sets aside the study of the biases, errors, and external influences that shape judicial decisions, and hypothesizes that individuals confronted with a normative dilemma (what would be the just sentence?) display a concern for justice that is worth analyzing in its own right. The results of the dissertation indicate that an appreciable proportion of the choices and judgments of the citizens and judicial actors surveyed demonstrate, by virtue of their internal coherence and their moderation, a manifest concern for justice. The data come from a sentencing survey in which respondents from the public (n=297), as well as a sample of judicial actors (n=235), were asked to make sentencing decisions in three detailed case histories. The dissertation focuses on the determination of the just sentence, which involves three distinct decisions. The first chapter examines the quality of individuals' scales of severity for the sentences that can be imposed to punish an offender convicted of criminal acts. The results indicate that citizens, like judicial actors, do not use the same metric to assess the severity of sentences, but that some of them use a more coherent and more reasonable penal metric than others. A decisive test of the value of a metric is its ability to establish penal equivalences among prison sentences, probation, community service, and fines. The second chapter examines the quality of the sentencing choices of citizens and judicial actors. Two criteria are used to distinguish the most just sentences: 1) the criterion of proportionality, or internal coherence (are the sentences given always proportional to the seriousness of the offense committed?); 2) the criterion of moderation, or external coherence (can the sentence given rally the greatest number of points of view?). Both criteria are important because they each help reduce the margin of uncertainty in the sentencing dilemma. The third chapter notes that any sentence may be subject to subsequent adjustment. The most obvious forms of penal adjustment are governed by the granting of parole, which shortens the portion of the sentence served in prison. Some judicial actors choose to account for this early release by inflating their sentence, while others refuse to do so. The final chapter examines the reasons behind their choices.