Abstract:
Objective: To evaluate the effectiveness and safety of correction of pectus excavatum by the Nuss technique based on the available scientific evidence. Methods: We conducted an evidence synthesis following systematic processes of search, selection, extraction and critical appraisal. Outcomes were classified by importance and had their quality assessed by the Grading of Recommendations Assessment, Development and Evaluation (GRADE) approach. Results: The selection process led to the inclusion of only one systematic review, which synthesized the results of nine observational studies comparing the Nuss and Ravitch procedures. The evidence found was rated as low and very low quality. The Nuss procedure increased the incidence of hemothorax (RR = 5.15; 95% CI: 1.07 to 24.89), pneumothorax (RR = 5.26; 95% CI: 1.55 to 17.92) and the need for reintervention (RR = 4.88; 95% CI: 2.41 to 9.88) compared with the Ravitch procedure. There was no statistical difference between the two procedures in the outcomes of general complications, blood transfusion, length of hospital stay and time to ambulation. The Nuss operation was faster than the Ravitch (mean difference [MD] = -69.94 minutes; 95% CI: -139.04 to -0.83). Conclusion: In the absence of well-designed prospective studies to clarify the evidence, especially in terms of aesthetics and quality of life, the surgical indication should be individualized and the choice of technique based on patient preference and the experience of the team.
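As a sketch of how risk ratios such as those above are computed, the following uses the standard log-transform method for the 95% confidence interval of a relative risk. The event counts are hypothetical illustrations, not data from the review.

```python
import math

def relative_risk(events_a, n_a, events_b, n_b, z=1.96):
    """Relative risk of group A versus group B with a 95% CI (log method)."""
    risk_a = events_a / n_a
    risk_b = events_b / n_b
    rr = risk_a / risk_b
    # Standard error of log(RR)
    se = math.sqrt(1/events_a - 1/n_a + 1/events_b - 1/n_b)
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, lo, hi

# Hypothetical counts for illustration only: 20/200 events vs 4/200
rr, lo, hi = relative_risk(20, 200, 4, 200)
print(f"RR = {rr:.2f} (95% CI: {lo:.2f} to {hi:.2f})")
```

A wide interval such as the review's RR = 5.15 (1.07 to 24.89) typically reflects small event counts, as the standard error term shows.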
Abstract:
The purpose of this thesis is twofold. The first and major part is devoted to sensitivity analysis of various discrete optimization problems, while the second part addresses methods for calculating measures of solution stability and for solving multicriteria discrete optimization problems. Among the numerous approaches to stability analysis of discrete optimization problems, two major directions can be singled out: quantitative and qualitative. Qualitative sensitivity analysis is conducted for multicriteria discrete optimization problems with minisum, minimax and minimin partial criteria. The main results obtained here are necessary and sufficient conditions for different stability types of optimal solutions (or of a set of optimal solutions) of the considered problems. Within the quantitative direction, various measures of solution stability are investigated. A formula for a quantitative characteristic called the stability radius is obtained for the generalized equilibrium situation invariant to changes of game parameters in the case of the Hölder metric. The quality of a problem solution can also be described in terms of robustness analysis. In this work the concepts of accuracy and robustness tolerances are presented for a strategic game with a finite number of players, where the initial coefficients (costs) of the linear payoff functions are subject to perturbations. The investigation of the stability radius also aims to devise methods for its calculation. A new metaheuristic approach is derived for calculating the stability radius of an optimal solution to the shortest path problem. The main advantage of the developed method is that it is potentially applicable to calculating stability radii of NP-hard problems. The last chapter of the thesis focuses on deriving innovative methods, based on an interactive optimization approach, for solving multicriteria combinatorial optimization problems.
The key idea of the proposed approach is to use a parameterized achievement scalarizing function to compute solutions and to steer the interactive procedure by changing the weighting coefficients of this function. To illustrate the introduced ideas, a decision-making process is simulated for a three-objective median location problem. The concepts, models and ideas collected and analyzed in this thesis provide a relevant foundation for developing more sophisticated, integrated models of postoptimal analysis and for solving the most computationally challenging problems related to it.
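A minimal sketch of how a weighted achievement scalarizing function can steer an interactive procedure, in the Wierzbicki style (max-term plus a small augmentation term). The candidate objective vectors, reference point, and weights are hypothetical, not taken from the thesis; changing the weights changes which candidate minimizes the function.

```python
def achievement(f, ref, w, rho=1e-4):
    """Achievement scalarizing function (minimization convention).

    f   : objective vector of a candidate solution
    ref : reference (aspiration) point
    w   : positive weighting coefficients steering the search
    rho : small augmentation coefficient avoiding weakly efficient points
    """
    terms = [wi * (fi - ri) for fi, ri, wi in zip(f, ref, w)]
    return max(terms) + rho * sum(terms)

# Toy three-objective example (hypothetical data): the decision maker
# adjusts the weights between iterations to redirect the search.
candidates = [(4.0, 2.0, 7.0), (3.0, 5.0, 6.0), (5.0, 4.0, 4.0)]
ref = (3.0, 2.0, 4.0)
for w in [(1.0, 1.0, 1.0), (2.0, 1.0, 0.1)]:
    best = min(candidates, key=lambda f: achievement(f, ref, w))
    print(w, "->", best)
```

With equal weights the third candidate wins; emphasizing the first objective (weight 2.0) shifts the choice to the first candidate, which is exactly the steering mechanism the abstract describes.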
Abstract:
Nowadays, computer-based systems tend to become more complex and to control increasingly critical functions affecting different areas of human activity. Failures of such systems might result in loss of human life as well as significant damage to the environment; therefore, their safety needs to be ensured. However, the development of safety-critical systems is not a trivial exercise. Hence, to preclude design faults and guarantee the desired behaviour, various industrial standards prescribe the use of rigorous techniques for the development and verification of such systems: the more critical the system, the more rigorous the approach that should be undertaken. To ensure the safety of a critical computer-based system, satisfaction of the safety requirements imposed on the system should be demonstrated. This task involves a number of activities. In particular, a set of safety requirements is usually derived by conducting various safety analysis techniques. Strong assurance that the system satisfies the safety requirements can be provided by formal methods, i.e., mathematically based techniques. At the same time, evidence that the system under consideration meets the imposed safety requirements might be demonstrated by constructing safety cases. However, the overall safety assurance process for critical computer-based systems remains insufficiently defined, for the following reasons. Firstly, there are semantic differences between safety requirements and formal models: informally represented safety requirements must be translated into the underlying formal language to enable further verification. Secondly, the development of formal models of complex systems can be labour-intensive and time-consuming. Thirdly, there are only a few well-defined methods for the integration of formal verification results into safety cases.
This thesis proposes an integrated approach to the rigorous development and verification of safety-critical systems that (1) facilitates elicitation of safety requirements and their incorporation into formal models, (2) simplifies formal modelling and verification by proposing specification and refinement patterns, and (3) assists in the construction of safety cases from the artefacts generated by formal reasoning. Our chosen formal framework is Event-B. It allows us to tackle the complexity of safety-critical systems as well as to structure safety requirements by applying abstraction and stepwise refinement. The Rodin platform, a tool supporting Event-B, assists in automatic model transformations and proof-based verification of the desired system properties. The proposed approach has been validated by several case studies from different application domains.
Abstract:
Traditional methods for studying the magnetic shape memory (MSM) alloys Ni-Mn-Ga include subjecting the entire sample to a uniform magnetic field or actuating the sample mechanically as a whole. These methods have produced significant results in characterizing the MSM effect and the properties of Ni-Mn-Ga, and have pioneered the development of applications of this material. Twin boundaries and their configuration within a Ni-Mn-Ga sample are a key component of the magnetic shape memory effect. Applications under development require an understanding of twin boundary characteristics and, more importantly, the ability to control them predictably. Twins have such a critical role that the twinning stress of a Ni-Mn-Ga crystal is the defining characteristic of its quality, and significant research has been conducted to minimize this property. This dissertation reports a decrease in the twinning stress, predictable control of the twin configuration, and a characterization of the dynamics of twin boundaries. A reduction of the twinning stress is demonstrated by the discovery of Type II twins within Ni-Mn-Ga, which have as little as 10% of the twinning stress of traditional Type I twins. Furthermore, new methods of actuating a Ni-Mn-Ga element using localized unidirectional or bidirectional magnetic fields were developed that can predictably control the twin configuration in a localized area of a Ni-Mn-Ga element. This method of controlling the local twin configuration was used to characterize twin boundary dynamics. Using a localized magnetic pulse, the velocity and acceleration of a single twin boundary were measured to be 82.5 m/s and 2.9 × 10⁷ m/s², and the time needed for the twin boundary to nucleate and begin moving was less than 2.8 μs.
Using a bidirectional magnetic field from a diametrically magnetized cylindrical magnet, a highly reproducible and controllable local twin configuration was created in a Ni-Mn-Ga element; this configuration is the fundamental pumping mechanism of the MSM micropump that was co-invented and extensively characterized by the author.
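As a back-of-envelope consistency check (my own arithmetic, not a calculation from the dissertation), the reported velocity and acceleration are kinematically compatible with the sub-3 μs start-up time: accelerating from rest at 2.9 × 10⁷ m/s², a boundary reaches 82.5 m/s in about 2.8 μs.

```python
# Kinematics of the reported twin-boundary measurements (constant
# acceleration from rest assumed, purely for an order-of-magnitude check).
v = 82.5       # m/s, measured twin-boundary velocity
a = 2.9e7      # m/s^2, measured twin-boundary acceleration
t = v / a               # time to reach v from rest
d = v**2 / (2 * a)      # distance covered while accelerating
print(f"t = {t*1e6:.2f} us, d = {d*1e6:.1f} um")
```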
Abstract:
Negotiating trade agreements is an important part of government trade policy, of economic planning and of today's globally operating trading system. In global comparison, the European Union and the United States have been active in forming trade agreements. Now these two economic giants are engaged in negotiations to form their own trade agreement, the so-called Transatlantic Trade and Investment Partnership (TTIP). The purpose of this thesis is to understand the reasons for making a trade agreement between two economic areas and the issues it may involve in the case of the TTIP. The TTIP has received a great deal of attention in the media; opinions on the partnership have been extreme, and the debate has been heated. The purpose of this study is to characterize the public discussion of the TTIP from spring 2013 until 2014. The research problem is to find out what the main issues in the agreement are and what values influence them. The study applied methods of critical discourse analysis to the chosen data. This included gathering the issues from the data based on the attention each received in the discussion. The underlying motives for raising different issues were analysed by investigating the authors' positions in political, economic and social circles. The perceived economic impacts of the TTIP were also analysed using the same criteria. Some of the most respected economic newspapers globally were included in the research material, as well as papers and reports published by the EU and global organisations. The analysis indicates a clear dichotomy in attitudes towards the TTIP. Key problems include the lack of transparency in the negotiations, the misunderstood investor-state dispute settlement, the constantly expanding regulatory issues and the risk of protectionism.
The theory and data do suggest that the removal of tariffs is an effective tool for achieving economic gains in the TTIP, and that reducing non-tariff barriers, such as protectionism, would be even more effective. Critics are worried about the rising influence of corporations over governments. The discourse analysis reveals that supporters of the TTIP hold values related to increasing welfare through economic growth. Critics do not deny the economic benefits but raise the question of inequality as a consequence. Overall, the critics represent softer values such as sustainable development and democracy as a counterweight to the corporate values of efficiency and profit maximisation.
Abstract:
Sleep is important for the recovery of a critically ill patient, as lack of sleep is known to negatively influence a person's cardiovascular system, mood, orientation, and metabolic and immune function, and thus may prolong patients' intensive care unit (ICU) and hospital stays. Intubated and mechanically ventilated patients suffer from fragmented and light sleep. However, it is not well known how non-intubated patients sleep. The evaluation of patients' sleep may be compromised by their fatigue and still posture, which give no indication of whether they are asleep or not. The purpose of this study was to evaluate ICU patients' sleep evaluation methods, the quality of non-intubated patients' sleep, and the sleep evaluations performed by ICU nurses. The aims were to develop recommendations on patients' sleep evaluation for ICU nurses and to provide a description of the quality of non-intubated patients' sleep. The literature review of ICU patients' sleep evaluation methods extended to the end of 2014. The evaluation of the quality of patients' sleep was conducted with four data sets: A) the nurses' narrative documentation of the quality of patients' sleep (n=114), B) the nurses' sleep evaluations (n=21) with a structured observation instrument, C) the patients' self-evaluations (n=114) with the Richards-Campbell Sleep Questionnaire, and D) polysomnographic evaluations of the quality of patients' sleep (n=21). The correspondence of data set A with data set C (collected 4–8/2011), and of data set B with data set D (collected 5–8/2009), was analysed. Content analysis was used for the nurses' documentation and statistical analyses for all the other data. The quality of non-intubated patients' sleep varied between individuals. In many patients, sleep was light, awakenings were frequent, and the amount of sleep was insufficient compared to sleep in healthy people. However, some patients were able to sleep well. On average, the patients evaluated the quality of their sleep as neither high nor low.
On a scale from 0 (poor sleep) to 100 (good sleep), sleep depth was evaluated as the worst aspect of sleep and the speed of falling asleep as the best. Nursing care was mostly performed while the patients were awake, and thus its disturbing effect was low. The instruments available for nurses to evaluate the quality of patients' sleep were limited and measured mainly the quantity of sleep. Nurses' structured observational evaluations of the quality of patients' sleep were correct in approximately two thirds of the cases, and only regarding total sleep time. Nurses' narrative documentation of patients' sleep corresponded with the patients' self-evaluations in just over half of the cases. However, nurses documented several dimensions of sleep that are not included in the present sleep evaluation instruments. These could be classified according to the components of the nursing process: needs assessment, sleep assessment, intervention, and effect of intervention. Valid, more comprehensive sleep evaluation methods are needed for nurses to evaluate, document, improve and study patients' quality of sleep.
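A minimal sketch of how the Richards-Campbell Sleep Questionnaire used above is typically scored: five visual-analogue items, each from 0 (poor sleep) to 100 (good sleep), averaged into a total score. The item names reflect the instrument's usual dimensions and the values are hypothetical, not study data.

```python
# Hypothetical RCSQ item scores (0 = poor sleep, 100 = good sleep).
items = {
    "sleep depth": 35,                  # the worst-rated aspect in the study
    "sleep latency (falling asleep)": 70,  # the best-rated aspect
    "awakenings": 45,
    "returning to sleep": 50,
    "sleep quality": 55,
}
# The RCSQ total score is the mean of the five items.
total = sum(items.values()) / len(items)
print(f"RCSQ total: {total:.0f}/100")
```

A mid-range total like this corresponds to the abstract's finding that patients rated their sleep, on average, as neither high nor low.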
Abstract:
DNA extraction is a critical step in the analysis of Genetically Modified Organisms (GMOs) based on real-time PCR. In this study, the CTAB and DNeasy methods provided good quality and quantity of DNA from the texturized soy protein, infant formula, and soy milk samples. Concerning the Certified Reference Material consisting of 5% Roundup Ready® soybean, neither method yielded DNA of good quality. However, the dilution test applied to the CTAB extracts showed no interference from inhibitory substances. The PCR efficiencies of lectin target amplification were not statistically different, and the coefficients of determination (R²) demonstrated a high degree of correlation between the copy numbers and the threshold cycle (Ct) values. ANOVA showed suitable adjustment of the regression and absence of significant linear deviations. The efficiencies of p35S amplification were not statistically different, and all R² values using DNeasy extracts were above 0.98, with no significant linear deviations. Two of the three R² values using CTAB extracts were lower than 0.98, corresponding to a lower degree of correlation, and the lack-of-fit test showed a significant linear deviation in one run. The comparative analysis of the Ct values for the p35S and lectin targets demonstrated no statistically significant differences between the analytical curves of each target.
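The PCR efficiencies compared above are conventionally derived from the slope of the standard curve (Ct versus log₁₀ of template amount). This is the standard qPCR relationship, not a formula quoted from the paper; the slope values below are illustrative.

```python
def pcr_efficiency(slope):
    """Amplification efficiency from the slope of a qPCR standard curve
    (Ct plotted against log10 of template copy number).

    E = 10^(-1/slope) - 1; a slope of about -3.32 gives E ~ 1.0,
    i.e. 100% efficiency (perfect doubling each cycle).
    """
    return 10 ** (-1 / slope) - 1

# Illustrative slopes: steeper than -3.32 means lower efficiency.
for slope in (-3.1, -3.32, -3.6):
    print(f"slope {slope}: E = {pcr_efficiency(slope) * 100:.0f}%")
```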
Abstract:
The aim of this Master's thesis is to find a method for classifying spare part criticality in the case company. Several approaches to criticality classification of spare parts exist. The practical problem in this thesis is the lack of a generic analysis method for classifying spare parts of the case company's proprietary equipment. To find a classification method, a literature review of various analysis methods is required; the requirements of the case company also have to be recognized, which is achieved by consulting professionals in the company. The literature review shows that the analytic hierarchy process (AHP) combined with decision tree models is a common method for classifying spare parts in the academic literature. Most of the literature discusses spare part criticality from a stock-holding perspective. This perspective is also relevant for a customer-oriented original equipment manufacturer (OEM) such as the case company. A decision tree model is developed for classifying spare parts. The decision tree classifies spare parts into five criticality classes according to five criteria: safety risk, availability risk, functional criticality, predictability of failure and probability of failure. The criticality classes describe the level of criticality from non-critical to highly critical. The method is verified by classifying the spare parts of a full deposit stripping machine. The classification can be used as a generic model for recognizing critical spare parts of other similar equipment, from which spare part recommendations can be created. The purchase price of an item and equipment criticality were found to have no effect on spare part criticality in this context. The decision tree is recognized as the most suitable method for classifying spare part criticality in the company.
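A hypothetical sketch of a five-criteria decision tree of the kind described above. The question ordering and class cut-offs are illustrative assumptions, not the thesis's actual tree; they only show how the five criteria can map parts onto five criticality classes.

```python
def criticality_class(safety_risk, availability_risk,
                      functional_criticality, predictable_failure,
                      failure_probability):
    """Return a criticality class from 1 (non-critical) to 5 (highly critical).

    Illustrative rules only: safety risk dominates, then availability risk
    combined with functional criticality, then failure characteristics.
    """
    if safety_risk:
        return 5                      # any safety risk -> highly critical
    if availability_risk and functional_criticality == "high":
        return 4
    if functional_criticality == "high":
        # a predictable failure can be planned for, lowering criticality
        return 2 if predictable_failure else 3
    if failure_probability == "high":
        return 2
    return 1                          # non-critical spare part

# Example: no safety risk, but an availability risk on a functionally
# critical part places it in class 4.
print(criticality_class(False, True, "high", False, "low"))
```

Note that, consistent with the thesis's finding, purchase price appears nowhere in the tree.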
Abstract:
The use of certain performance-enhancing substances and methods has been defined as a major ethical breach by parties involved in the governance of high-performance sport. As a result, elite athletes worldwide are subject to the rules and regulations set out in international and national anti-doping policies. Existing literature on the development of policies such as the World Anti-Doping Code and the Canadian Anti-Doping Program suggests a sport system in which athletes are rarely meaningfully involved in policy development (Houlihan, 2004a). Additionally, it is suggested that this lack of involvement reflects a similar lack of involvement in other areas of governance concerning athletes' lives. The purpose of this thesis is to examine the history and current state of athletes' involvement in the anti-doping policy process in Canada's high-performance sport system. It includes discussion and analysis of recently conducted interviews with those involved in the policy process, as well as an analysis of relevant documents, including anti-doping policies. The findings demonstrate that Canadian athletes have not been significantly involved in the creation of recently developed anti-doping policies and that a re-evaluation of current policies is necessary to more fully recognize the reality of athletes' lives in Canada's high-performance sport system and their rights within that system.
Abstract:
The intent of this study was to investigate in what ways teachers' beliefs about education and teaching are expressed in the specific teaching behaviours they employ, and whether teaching behaviours, as perceived by their students, are correlated with students' critical thinking and self-directed learning. To this end, the relationships studied were: among faculty members' philosophy of teaching, locus of control orientation, psychological type, and observed teaching behaviour; and among students' psychological type, perceptions of teaching behaviour, self-directed learning readiness, and critical thinking. The overall purpose of the study was to investigate whether the implicit goals of higher education, critical thinking and self-direction, were actually accounted for in the university classroom. The research was set within the context of path-goal theory, adapted from the leadership literature. Within this framework, Mezirow's work on transformative learning, including the influence of Habermas' writings, was integrated to develop a theoretical perspective on which to base the research methodology. Both qualitative and quantitative methodologies were incorporated. Four faculty members and a total of 142 students participated in the study. Philosophy of teaching was described through faculty interviews and completion of a repertory grid. Faculty completed a descriptive locus of control scale and a psychological type test, and observations of their teaching behaviour were conducted. Students completed a Teaching Behaviour Assessment Scale, the Self-Directed Learning Readiness Scale, a psychological type test, and the Watson-Glaser Critical Thinking Appraisal. A small sample of students was interviewed. Follow-up discussions with faculty were used to validate the interview, observation, teaching behaviour, and repertory grid data. Results indicated that some discrepancies existed between faculty's espoused philosophy of teaching and their observed teaching behaviour.
Instructors' teaching behaviour, however, was a function of their personal theory of practice. Relationships were found between perceived teaching behaviour and students' self-directed learning and critical thinking, but these varied across situations, as path-goal theory would predict. The psychological types of students and instructor also accounted for some of the variability in the relationships studied; student psychological type was shown to be a partial predictor of self-directed learning readiness. The results were discussed in terms of theory development and implications for further research and practice.
Abstract:
Youth are critical partners in health promotion, but the process of training young people to become meaningfully involved is challenging. This mixed-methods evaluation considered the impact of a leadership camp in preparing 42 grade-seven students to become peer health leaders in a 'heart health' initiative. The experiences of participants and their sense of agency were explored. Data were collected from pre- and post-camp surveys, focus groups, student journals and researcher observations. Findings indicate that relationships with peers and adults were key to agency development, and participants appeared to broaden their perspectives on the meanings of 'health' and 'leadership.' Significant changes on two subscales of the Harter Perceived Competence Scale for Children were also found. Suggestions for practice and further research are provided.
Abstract:
Network survivability is a very interesting area of technical study as well as a critical concern in network design. Given that more and more data are carried over communication networks, a single failure can interrupt millions of users and cause millions of dollars in lost revenue. Network protection techniques consist of providing spare capacity in a network and automatically rerouting flows around a failure using this available capacity. This thesis addresses the design of survivable optical networks that use protection schemes based on p-cycles. More precisely, path-protecting p-cycles are exploited in the context of link failures. Our study focuses on the placement of p-cycle protection structures, assuming that the working paths for the set of requests are defined a priori. Most existing work uses heuristics or solution methods that have difficulty solving large instances. The objective of this thesis is twofold. On the one hand, we propose models and solution methods capable of tackling larger problems than those already presented in the literature. On the other hand, thanks to the new algorithms, we are able to produce optimal or near-optimal solutions. To do so, we rely on column generation, a technique well suited to solving large-scale linear programming problems. In this project, column generation is used as an intelligent way of implicitly enumerating promising cycles.
We first propose formulations for the master and pricing problems, as well as a first column generation algorithm for the design of networks protected by path-protecting p-cycles. The algorithm obtains better solutions, in reasonable time, than those obtained by existing methods. Next, a more compact formulation is proposed for the pricing problem. In addition, we present a new hierarchical decomposition method that greatly improves the overall efficiency of the algorithm. Regarding integer solutions, we propose two heuristic methods that succeed in finding good solutions. We also carry out a systematic comparison between p-cycles and classical shared-protection schemes, performing a precise comparison with unified, column-generation-based formulations to obtain high-quality results. We then empirically evaluate the directed and undirected versions of p-cycles for link protection as well as for path protection, under asymmetric traffic scenarios, and show the additional protection cost incurred when bidirectional systems are used in such scenarios. Finally, we study a column generation formulation for the design of p-cycle networks in the presence of availability requirements and obtain the first lower bounds for this problem.
Abstract:
With new optical network technologies, an ever-larger amount of data can be carried by a single wavelength, up to 40 gigabits per second (Gbps). Individual data flows, by contrast, require much less bandwidth. Traffic grooming is a technique that allows efficient use of the bandwidth offered by a wavelength: it consists of assembling several low-rate data flows into a single data entity that can be carried on one wavelength. Wavelength Division Multiplexing (WDM) makes it possible to carry several wavelengths on the same fibre. Using the two techniques together, WDM and traffic grooming, an amount of data on the order of terabits per second (Tbps) can be carried on a single optical fibre. Traffic protection in optical networks then becomes a vital operation, since a single failure can disrupt thousands of users and cause significant losses, up to several million dollars, for the network operator and its users. The protection technique consists of reserving spare capacity to carry traffic in the event of a network failure. This thesis studies traffic grooming and traffic protection techniques using p-cycles in optical networks in a dynamic traffic context. Most existing work considers static traffic, where the network state and the traffic are given at the outset and do not change. Moreover, most of this work uses heuristics or methods that have difficulty solving large instances. In the dynamic traffic context, two major difficulties are added to the problems studied, because of the continual change of the traffic in the network.
The first is that the solution proposed in the previous period, even if optimized, is no longer necessarily optimized or optimal for the current period, so a new optimization of the solution to the problem becomes necessary. The second difficulty is that solving the problem for a given period differs from solving it for the initial period, because the connections already in progress in the network must not be disturbed too much at each time period. The study of traffic grooming in a dynamic traffic context consists of proposing different scenarios for coping with this type of traffic, with the objective of maximizing the bandwidth of the connections accepted at each time period. Mathematical formulations of the different scenarios considered for the grooming problem are proposed. Our work on the protection problem considers two types of p-cycles: those protecting links (basic p-cycles) and FIPP p-cycles (p-cycles protecting paths). This work first consisted of proposing different scenarios for managing protection p-cycles in a dynamic traffic context; a study of the stability of p-cycles under dynamic traffic was then carried out. Formulations of the different scenarios were proposed, and the solution methods used make it possible to tackle larger problems than those presented in the literature. We rely on the column generation method to implicitly enumerate the most promising cycles. In the study of path-protecting p-cycles, or FIPP p-cycles, we proposed formulations for the master and pricing problems, and we used a hierarchical decomposition of the problem that allows us to obtain better results in reasonable time.
As for basic p-cycles, we studied the stability of FIPP p-cycles in a dynamic traffic context. The results show that, depending on the optimization criterion, basic p-cycles (protecting links) and FIPP p-cycles (protecting paths) can be very stable.
Abstract:
The Qur'an and the Sunnah (the tradition of the Prophet Muḥammad) as related in the aḥâdîth (the oral traditions of the Prophet) represent the eternal source of inspiration and knowledge to which Muslims refer in order to act, react and interact. Accordingly, throughout Muslim history, these sacred sources have been the basis of Muslims' relations with others, including Christians. The three major points of differentiation between Islam and Christianity are: the divine nature of Jesus, the Trinity, and the crucifixion and death of Jesus on the cross. The Qur'an's clear-cut position on the first two points leaves no room for academic debate. However, the ambiguity of the Qur'anic text regarding the crucifixion and death of Jesus has given rise to numerous debates among mufassirûn (exegetes of the Qur'an). This thesis is a textual analysis of the two Qur'anic passages that deal with this third difference. For this textual and intertextual study, the tafâsîr (interpretations of the Qur'an) of eight mufassirûn belonging to different madhâhib (schools of interpretation) and to different periods in the history of Muslim-Christian relations are used in combination with certain recent approaches and methods, such as historical criticism and redaction criticism. In addition, three new theories developed in the thesis enrich the hermeneutical tools of the research: the 'theory of the five layers of meaning', the 'theory of double Qur'anic messages' and the 'theory of the tripartite human nature'. In light of these theories and methods, it appears that the Qur'anic ambiguity concerning the crucifixion and death of Jesus is a clear invitation from the Qur'an for Muslims and Christians to live with this insoluble ambiguity.
The conclusion of this thesis contributes directly to better Muslim-Christian relations, reinforcing the Qur'anic call (Qur'an 3:64, 103) to these two communities to hold fast to their major common points, to accommodate their minor differences, to devote their energies to a harmonious life together, and to leave the rest in the hands of the God they share.
Abstract:
To claim that the citizens of Western democracies are the object of effective, systematic, large-scale surveillance is bound to provoke an incredulous reaction. Demagoguery, some will say. Yet progress in the technologies for collecting, processing and storing information forces us to reflect on this hypothesis. It has rightly been pointed out that the high costs of the rudimentary means employed by the secret police of the past contained the threat to some extent. Tailing, infiltration and the nocturnal abduction of dissidents lacked subtlety. By contrast, the genius of modern techniques lies in the fact that they do not disrupt people's daily lives. But beyond technical refinement, panoptic control of the masses reaches a peak of efficiency once the masses are led to consent to it. As Professor Raab observed: "[TRANSLATION] Surveillance thrives naturally in authoritarian regimes that do not expose themselves to public debate or criticism. When it is used in so-called democratic regimes, it is legitimized and circumscribed by arguments of necessity or special justification, just like censorship"[1]. Law, as a discourse of rationality, skilfully accomplishes this work of legitimation. It is in this spirit that a radical analysis of the legal rules governing the right to privacy sheds new light on our false sense of security.