989 results for Dynamic Operation
Abstract:
Sustainable resource use is one of the most important environmental issues of our time. It is closely related to discussions on the 'peaking' of various natural resources serving as energy sources, agricultural nutrients, or metals indispensable in high-technology applications. Although the peaking theory remains controversial, it is commonly recognized that a more sustainable use of resources would alleviate the negative environmental impacts related to resource use. In this thesis, sustainable resource use is analysed from a practical standpoint, through several different case studies. Four of these case studies relate to resource metabolism in the Canton of Geneva in Switzerland: the aim was to model the evolution of chosen resource stocks and flows in the coming decades. The studied resources were copper (a bulk metal), phosphorus (a vital agricultural nutrient), and wood (a renewable resource). The case of lithium (a critical metal) was also analysed briefly, in a qualitative manner and from an electric mobility perspective. In addition to the Geneva case studies, this thesis includes a case study on the sustainability of space life support systems, that is, systems whose aim is to provide the crew of a spacecraft with the necessary metabolic consumables over the course of a mission. Sustainability was again analysed from a resource use perspective. In this case study, the functioning of two different types of life support systems, ARES and BIORAT, was evaluated and compared; these systems represent, respectively, physico-chemical and biological life support systems. Space life support systems could in fact be used as a kind of 'laboratory of sustainability', given that they are closed and relatively simple systems compared to complex and open terrestrial systems such as the Canton of Geneva. The analysis method chosen for the Geneva case studies was dynamic material flow analysis: dynamic material flow models were constructed for copper, phosphorus, and wood. Besides a baseline scenario, various alternative scenarios (notably involving increased recycling) were also examined. In the case of space life support systems, the methodology of material flow analysis was also employed, but as the available data on the dynamic behaviour of the systems was insufficient, only static simulations could be performed. The results of the Geneva case studies show the following. Were resource use to follow population growth, resource consumption would be multiplied by nearly 1.2 by 2030 and by 1.5 by 2080. A complete transition to electric mobility would be expected to increase copper consumption per capita only slightly (+5%), while the lithium demand in cars would increase 350-fold. Phosphorus imports could be decreased by recycling sewage sludge or human urine; however, the health and environmental impacts of these options have yet to be studied. Increasing wood production in the Canton would not significantly decrease the dependence on wood imports, as the Canton's production represents only 5% of total consumption. In the comparison of the space life support systems ARES and BIORAT, BIORAT outperforms ARES in resource use but not in energy use. However, as the systems are dimensioned very differently, it remains questionable whether they can be compared outright.
In conclusion, the use of dynamic material flow analysis can provide useful information for policy makers and strategic decision-making; however, uncertainty in reference data greatly influences the precision of the results. Space life support systems constitute an extreme case of resource-using systems; nevertheless, it is not clear how their example could be of immediate use to terrestrial systems.
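The dynamic material flow models referred to above boil down to year-by-year stock-and-flow bookkeeping. The following is a minimal sketch of that bookkeeping in Python, assuming a single resource whose demand follows population growth, a fixed product lifetime, and a recycling rate; all parameter values are illustrative placeholders rather than the data used in the thesis.

```python
# Minimal dynamic material-flow sketch for a single resource in a single region.
# All parameter values are illustrative placeholders, not the thesis data.

def simulate_stock(years=70, inflow0=1000.0, demand_growth=0.005,
                   lifetime=30, recycling_rate=0.1):
    """Year-by-year stock-and-flow bookkeeping.

    inflow0        -- initial annual inflow into use (t/yr), assumed value
    demand_growth  -- annual growth of demand, here a proxy for population growth
    lifetime       -- average residence time of the resource in the in-use stock (yr)
    recycling_rate -- share of the end-of-life outflow returned as secondary supply
    """
    inflows, stock = [], 0.0
    for t in range(years):
        inflow = inflow0 * (1 + demand_growth) ** t      # demand follows population
        inflows.append(inflow)
        # outflow: material that entered use `lifetime` years ago reaches end of life
        outflow = inflows[t - lifetime] if t >= lifetime else 0.0
        stock += inflow - outflow                        # in-use stock accounting
        primary_demand = inflow - recycling_rate * outflow
        if t % 10 == 0:
            print(f"year {t:2d}  inflow {inflow:7.1f}  outflow {outflow:7.1f}  "
                  f"stock {stock:9.1f}  primary demand {primary_demand:7.1f}")

simulate_stock()
```

Alternative scenarios, such as increased recycling, are then explored simply by re-running the simulation with different parameter values and comparing the resulting primary demand.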
Abstract:
Efforts to improve safety and traffic flow through merge areas on high-volume/high-speed roadways have included early merge and late merge concepts and several studies of the effectiveness of these concepts, many using Intelligent Transportation Systems for implementation. The Iowa Department of Transportation (Iowa DOT) planned to employ a system of dynamic message signs (DMS) to enhance standard temporary traffic control for lane closures and traffic merges at two bridge construction projects on I-80 in western Iowa (Adair and Cass counties) during the 2008 construction season. To evaluate the DMS system's effectiveness in influencing driver merging actions, the Iowa DOT contracted with Iowa State University's Center for Transportation Research and Education to perform the evaluation and make recommendations for future use of this system based on the results. Data were collected over four weekends, beginning August 1–4 and ending October 16–20, 2008. Two weekends yielded sufficient data for evaluation, one with transitional traffic flow and the other with a period of congestion. For both of these periods, a statistical review of the collected data did not indicate a significant impact on driver merging actions when the DMS messaging was activated, as compared to free-flow conditions with no messaging. Collection of relevant project data proved problematic for several reasons: in addition to personnel safety issues associated with the placement and retrieval of counting devices on a high-speed roadway, unsatisfactory equipment performance and insufficient congestion to activate the DMS messaging hampered efforts. A review of the collected data revealed differences between the results recorded by the tube counters and those recorded by the older-model plate counters. Although the variations were not significant from a practical standpoint, a statistical evaluation showed that the data (volumes, speeds, and classifications) from the two sources were not comparable at a 95% level of confidence. Comparison with data from the Iowa DOT's automated traffic recorders (ATRs) in the area also suggested variations in results from these data collection systems. Additional comparison studies were recommended.
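The comparability check between the tube counters and the older plate counters amounts to a two-sample hypothesis test at the 95% level of confidence. A minimal sketch of such a test, assuming paired interval volumes from the two devices; the sample values below are made-up placeholders, not project data.

```python
# Sketch of the kind of 95%-confidence comparison described above:
# do two counting devices report the same interval volumes?
# The sample values are made-up placeholders, not project data.
from scipy import stats

tube_counts  = [412, 398, 455, 470, 388, 421, 440, 465, 402, 430]
plate_counts = [405, 385, 448, 490, 371, 410, 452, 480, 390, 418]

# Paired test: both devices observed the same counting intervals.
t_stat, p_value = stats.ttest_rel(tube_counts, plate_counts)
alpha = 0.05  # 95% level of confidence
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
if p_value < alpha:
    print("Difference is statistically significant at the 95% level.")
else:
    print("No significant difference at the 95% level.")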
Abstract:
A single coronary artery can complicate the surgical technique of arterial switch operations, impairing early and late outcomes. We propose a new surgical approach, successfully applied in a 2.1 kg neonate, aimed at reducing the risk of early and late compression and/or distortion of the newly constructed coronary artery system.
Abstract:
EXECUTIVE SUMMARY: Evaluating the Information Security posture of an organization is becoming a very complex task. Currently, the evaluation and assessment of Information Security are commonly performed using frameworks, methodologies and standards which often consider the various aspects of security independently. Unfortunately, this is ineffective because it does not take into consideration the necessity of having a global and systemic multidimensional approach to Information Security evaluation, even though the overall security level is generally considered to be only as strong as its weakest link. This thesis proposes a model aiming to holistically assess all dimensions of security in order to minimize the likelihood that a given threat will exploit the weakest link. A formalized structure taking into account all security elements is presented; this is based on a methodological evaluation framework in which Information Security is evaluated from a global perspective. This dissertation is divided into three parts. Part One, Information Security Evaluation Issues, consists of four chapters. Chapter 1 is an introduction to the purpose of this research and the model that will be proposed. In this chapter we raise some questions with respect to "traditional evaluation methods" and identify the principal elements to be addressed. We then introduce the baseline attributes of our model and set out the expected results of evaluations performed according to it. Chapter 2 focuses on the definition of Information Security to be used as a reference point for our evaluation model. The concepts inherent in a holistic, baseline Information Security Program are defined, and on this basis the most common roots of trust in Information Security are identified. Chapter 3 focuses on an analysis of the difference and the relationship between the concepts of Information Risk Management and Security Management. Comparing these two concepts allows us to identify the most relevant elements to be included within our evaluation model, while clearly situating these two notions within a defined framework is of the utmost importance for the results that will be obtained from the evaluation process. Chapter 4 sets out our evaluation model and the way it addresses issues relating to the evaluation of Information Security. Within this chapter the underlying concepts of assurance and trust are discussed. Based on these two concepts, the structure of the model is developed in order to provide an assurance-related platform as well as three evaluation attributes: "assurance structure", "quality issues", and "requirements achievement". Issues relating to each of these evaluation attributes are analysed with reference to sources such as methodologies, standards and published research papers. The operation of the model is then discussed: assurance levels, quality levels and maturity levels are defined in order to perform the evaluation according to the model. Part Two, Implementation of the Information Security Assurance Assessment Model (ISAAM) according to the Information Security Domains, consists of four chapters. This is the section where our evaluation model is put into a well-defined context with respect to the four pre-defined Information Security dimensions: the Organizational dimension, the Functional dimension, the Human dimension, and the Legal dimension. Each Information Security dimension is discussed in a separate chapter.
For each dimension, the following two-phase evaluation path is followed. The first phase concerns the identification of the elements which will constitute the basis of the evaluation: the identification of the key elements within the dimension; the identification of the Focus Areas for each dimension, consisting of the security issues identified for that dimension; and the identification of the Specific Factors for each dimension, consisting of the security measures or controls addressing those security issues. The second phase concerns the evaluation of each Information Security dimension through: the implementation of the evaluation model, based on the elements identified in the first phase, by identifying the security tasks, processes, procedures, and actions that should have been performed by the organization to reach the desired level of protection; and the maturity model for each dimension as a basis for reliance on security. For each dimension we propose a generic maturity model that could be used by every organization in order to define its own security requirements. Part Three of this dissertation contains the Final Remarks, Supporting Resources and Annexes. With reference to the objectives of our thesis, the Final Remarks briefly analyse whether these objectives were achieved and suggest directions for future related research. The Supporting Resources comprise the bibliographic resources that were used to elaborate and justify our approach. The Annexes include all the relevant topics identified within the literature to illustrate certain aspects of our approach. Our Information Security evaluation model is based on and integrates different Information Security best practices, standards, methodologies and research expertise, which can be combined in order to define a reliable categorization of Information Security. After the definition of terms and requirements, an evaluation process should be performed in order to obtain evidence that the Information Security within the organization in question is adequately managed. We have specifically integrated into our model the most useful elements of these sources of information in order to provide a generic model able to be implemented in all kinds of organizations. The value added by our evaluation model is that it is easy to implement and operate and answers concrete needs for a reliable, efficient and dynamic evaluation tool built on a coherent evaluation system. On that basis, our model could be implemented internally within organizations, allowing them to better govern their Information Security. RÉSUMÉ: General context of the thesis. Evaluating security in general, and information security in particular, has become for organizations not only a crucial task to carry out but also an increasingly complex one. At present, this evaluation relies mainly on methodologies, best practices, norms or standards that address the various aspects of information security separately. We consider this way of evaluating security inefficient, because it does not take into account the interactions between the different dimensions and components of security, even though it has long been accepted that the overall security level of an organization is always that of the weakest link in the security chain.
We identified the need for a global, integrated, systemic and multidimensional approach to the evaluation of information security. Indeed, and this is the starting point of our thesis, we demonstrate that only a global consideration of security can satisfy the requirements of optimal security as well as the specific protection needs of an organization. Our thesis therefore proposes a new security evaluation paradigm designed to meet the effectiveness and efficiency needs of a given organization. We propose a model that aims to evaluate all dimensions of security holistically, in order to minimize the probability that a potential threat could exploit vulnerabilities and cause direct or indirect damage. This model is based on a formalized structure that takes into account all the elements of a security system or programme. We thus propose a methodological evaluation framework that considers information security from a global perspective. Structure of the thesis and topics covered: the document is organized in three parts. The first part, entitled "The problem of information security evaluation", comprises four chapters. Chapter 1 introduces the object of the research and the basic concepts of the proposed evaluation model. The traditional way of evaluating security is critically analysed in order to identify the principal, invariant elements to be taken into account in our holistic approach. The basic elements of our evaluation model and its expected operation are then presented in order to outline the results expected from the model. Chapter 2 focuses on the definition of the notion of Information Security. It is not a redefinition of the notion of security, but a perspective on the dimensions, criteria and indicators to be used as a reference baseline, in order to determine the object of the evaluation that will be used throughout our work. The concepts inherent in the holistic nature of security, as well as the constituent elements of a security baseline, are defined accordingly. This makes it possible to identify what we have called the 'roots of trust'. Chapter 3 presents and analyses the difference and the relationships between the Risk Management and Security Management processes, in order to identify the constituent elements of the protection framework to be included in our evaluation model. Chapter 4 is devoted to the presentation of our evaluation model, the Information Security Assurance Assessment Model (ISAAM), and the way it meets the evaluation requirements presented previously. In this chapter the underlying concepts of assurance and trust are analysed. Based on these two concepts, the structure of the evaluation model is developed so as to obtain a platform offering a certain level of assurance, resting on three evaluation attributes, namely the 'assurance structure', the 'quality of the process', and the 'achievement of requirements and objectives'.
The issues related to each of these evaluation attributes are analysed on the basis of the state of the art in research and in the literature, of the various existing methods, and of the most common norms and standards in the security field. On this basis, three different evaluation levels are constructed, namely the assurance level, the quality level and the maturity level, which together constitute the basis for evaluating the overall security state of an organization. The second part, "Application of the Information Security Assurance Assessment Model by security domain", also comprises four chapters. The evaluation model already constructed and analysed is, in this part, placed in a specific context according to the four predefined security dimensions: the Organizational dimension, the Functional dimension, the Human dimension, and the Legal dimension. Each of these dimensions and its specific evaluation is treated in a separate chapter. For each dimension, a two-phase evaluation is constructed as follows. The first phase concerns the identification of the elements that constitute the basis of the evaluation: the identification of the key elements of the evaluation; the identification of the 'Focus Areas' of each dimension, which represent the issues found within that dimension; and the identification of the 'Specific Factors' of each Focus Area, which represent the security and control measures that help resolve or reduce the impact of the risks. The second phase concerns the evaluation of each of the dimensions presented above. It consists, on the one hand, of applying the general evaluation model to the dimension concerned, building on the elements specified in the first phase and identifying the specific security tasks, processes and procedures that should have been carried out to reach the desired level of protection. On the other hand, the evaluation of each dimension is completed by the proposal of a maturity model specific to that dimension, to be considered as a reference baseline for the overall security level. For each dimension we propose a generic maturity model that can be used by any organization to specify its own security requirements. This constitutes an innovation in the field of evaluation, which we justify for each dimension and whose added value we systematically highlight. The third part of the document concerns the overall validation of our proposal and contains, by way of conclusion, a critical perspective on our work and final remarks. This last part is completed by a bibliography and annexes. Our security evaluation model integrates and builds on numerous sources of expertise, such as best practices, norms, standards, methods and the expertise of scientific research in the field. Our constructive proposal addresses a genuine, still unresolved problem faced by all organizations, regardless of their size and profile.
It would allow them to specify their particular requirements regarding the security level to be met and to instantiate an evaluation process tailored to their needs, so that they can ensure that their information security is managed appropriately, thereby providing a certain level of confidence in the degree of protection delivered. We have integrated into our model the best know-how, experience and expertise currently available at the international level, with the aim of providing an evaluation model that is simple, generic and applicable to a large number of public or private organizations. The added value of our evaluation model lies precisely in the fact that it is sufficiently generic and easy to implement while responding to the concrete needs of organizations. Our proposal thus constitutes a reliable, efficient and dynamic evaluation tool derived from a coherent evaluation approach. As a result, our evaluation system can be implemented internally by the organization itself, without recourse to additional resources, and it also gives the organization the opportunity to better govern its information security.
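The ISAAM structure described above, dimensions broken down into Focus Areas and Specific Factors and assessed against maturity levels, lends itself to a simple hierarchical representation. The sketch below is a hypothetical illustration rather than the model's actual implementation: the 0-5 maturity scale and the weakest-link aggregation rule are assumptions inspired by the abstract's "weakest link" premise.

```python
# Hypothetical representation of the ISAAM hierarchy described above:
# dimensions contain focus areas, which contain specific factors,
# each assessed against a maturity level (0-5, assumed scale).
from dataclasses import dataclass, field

@dataclass
class SpecificFactor:              # a security measure or control
    name: str
    maturity: int                  # 0 (absent) .. 5 (optimized), assumed scale

@dataclass
class FocusArea:                   # a security issue within a dimension
    name: str
    factors: list = field(default_factory=list)

    def maturity(self):
        # weakest-link aggregation, mirroring the "weakest link" premise
        return min((f.maturity for f in self.factors), default=0)

@dataclass
class Dimension:                   # Organizational, Functional, Human, Legal
    name: str
    focus_areas: list = field(default_factory=list)

    def maturity(self):
        return min((fa.maturity() for fa in self.focus_areas), default=0)

organizational = Dimension("Organizational", [
    FocusArea("Security governance", [
        SpecificFactor("Documented security policy", maturity=3),
        SpecificFactor("Management review cycle", maturity=2),
    ]),
])
print(organizational.name, "maturity:", organizational.maturity())
```

An organization would populate this hierarchy with its own Focus Areas and Specific Factors and could swap the aggregation rule for a weighted one if a pure weakest-link view proves too strict.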
Abstract:
Emotion regulation is crucial for successfully engaging in social interactions. Yet, little is known about the neural mechanisms controlling behavioral responses to emotional expressions perceived in the faces of other people, which constitute a key element of interpersonal communication. Here, we investigated the brain systems involved in social emotion perception and regulation, using functional magnetic resonance imaging (fMRI) in 20 healthy participants. Participants viewed dynamic facial expressions of either happiness or sadness and were asked either to imitate the expression or to suppress any expression on their own face (in addition to performing a gender judgment control task). fMRI results revealed higher activity in regions associated with emotion (e.g., the insula), motor function (e.g., motor cortex), and theory of mind (e.g., [pre]cuneus) during imitation. Activity in the dorsal cingulate cortex was also increased during imitation, possibly reflecting greater action monitoring or conflict with the participants' own feeling states. In addition, premotor regions were more strongly activated during both imitation and suppression, suggesting a recruitment of motor control for both the production and the inhibition of emotion expressions. Expressive suppression (eSUP) produced increases in dorsolateral and lateral prefrontal cortex regions typically related to cognitive control. These results suggest that voluntary imitation and eSUP modulate brain responses to emotional signals perceived from faces by up- and down-regulating activity in distributed subcortical and cortical networks that are particularly involved in emotion, action monitoring, and cognitive control.
Abstract:
Background: We have recently shown that the median diagnostic delay to establish a Crohn's disease (CD) diagnosis in the Swiss IBD Cohort Study (SIBDCS) was 9 months; seventy-five percent of all CD patients were diagnosed within 24 months. The clinical impact of a long diagnostic delay on the natural history of CD is unknown. Aim: To compare the frequency and type of CD-related complications in patients with a long diagnostic delay (>24 months) vs. those diagnosed within 24 months. Methods: Retrospective analysis of data from the SIBDCS, comprising a large sample of CD patients followed in hospitals and private practices across Switzerland. Results: Two hundred CD patients (121 female, mean age 44.9 ± 15.0 years, 38% smokers, 71% ever treated with immunomodulators and 35% with anti-TNF) with a long diagnostic delay were compared to 697 CD patients (358 female, mean age 39.1 ± 14.9 years, 33% smokers, 74% ever treated with immunomodulators and 33% with anti-TNF) diagnosed within 24 months. No differences in outcomes were observed between the two patient groups within the first year after CD diagnosis. Among those diagnosed 2-5 years ago, CD patients with a long diagnostic delay (n = 45) presented more frequently with internal fistulas (11.1% vs. 3.1%, p = 0.03) and bowel stenoses (28.9% vs. 15.7%, p = 0.05), and they more frequently underwent CD-related operations (15.6% vs. 5.0%, p = 0.02) compared to the patients diagnosed within 24 months (n = 159). Among those diagnosed 6-10 years ago, CD patients with a long diagnostic delay (n = 48) presented more frequently with extraintestinal manifestations (60.4% vs. 34.6%, p = 0.001) than those diagnosed within 24 months (n = 182). For the patients diagnosed 11-15 years ago, no differences in outcomes were found between the long diagnostic delay group (n = 106) and the group diagnosed within 24 months (n = 32). Among those diagnosed >= 16 years ago, the group with a long diagnostic delay (n = 71) more frequently underwent CD-related operations (63.4% vs. 46.5%, p = 0.01) compared to the group diagnosed with CD within 24 months (n = 241). Conclusions: A long diagnostic delay in CD patients is associated with a more complicated disease course and a higher number of CD-related operations in the years following diagnosis. Our results indicate that efforts should be undertaken to shorten the diagnostic delay in CD patients in order to reduce the risk of progression towards a complicated disease phenotype.
Abstract:
An experiment was carried out to examine the impact on the electrodermal activity of people approached by groups of one or four virtual characters at varying distances. It was premised, on the basis of proxemics theory, that the closer the approach of the virtual characters to the participant, the greater the level of physiological arousal. Physiological arousal was measured by the number of skin conductance responses within a short time period after the approach, and by the maximum change in skin conductance level in the 5 s after the approach. Each virtual character was either a female humanoid or a human-sized cylinder, and one or four characters approached each subject a total of 12 times. Twelve male subjects were recruited for the experiment. The results suggest that the number of skin conductance responses after the approach and the change in skin conductance level increased the closer the virtual characters approached the participants. Moreover, these response variables were inversely correlated with the number of visits, showing a typical adaptation effect. There was some evidence to suggest that the number of characters approaching simultaneously (one or four) was positively associated with the responses. Surprisingly, there was no evidence of a difference in response between the humanoid characters and the cylinders on the basis of this physiological data. It is suggested that the similarity of this quantitative arousal response to virtual characters and virtual objects might mask a profound difference in qualitative response, an interpretation supported by questionnaire and interview results. Overall, the experiment supported the premise that people exhibit heightened physiological arousal the closer they are approached by virtual characters.
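The two arousal measures used in the experiment, the number of skin conductance responses shortly after an approach and the maximum change in skin conductance level within 5 s, can be illustrated with a small signal-processing sketch. The sampling rate, the 0.05 µS response threshold, and the synthetic signal below are illustrative assumptions, not the study's recording setup.

```python
# Sketch of the two arousal measures described above, on a synthetic signal.
# Sampling rate, SCR amplitude threshold, and the signal itself are assumptions.
import numpy as np

fs = 32                                 # Hz, assumed sampling rate
t = np.arange(0, 10, 1 / fs)            # 10 s of skin conductance after an approach
signal = 2.0 + 0.3 * np.exp(-((t - 1.5) ** 2)) + 0.005 * np.random.randn(t.size)

def scr_count(sig, fs, window_s=5.0, min_amplitude=0.05):
    """Count skin conductance responses: distinct trough-to-peak rises >= min_amplitude (muS)."""
    win = sig[: int(window_s * fs)]
    count, trough, peak, in_rise = 0, win[0], win[0], False
    for x in win[1:]:
        if not in_rise:
            trough = min(trough, x)
            if x - trough >= min_amplitude:   # rise large enough to count as an SCR
                count += 1
                in_rise, peak = True, x
        else:
            peak = max(peak, x)
            if peak - x >= min_amplitude:     # signal has recovered; allow a new SCR
                in_rise, trough = False, x
    return count

def max_scl_change(sig, fs, window_s=5.0):
    """Maximum change in skin conductance level within window_s seconds."""
    win = sig[: int(window_s * fs)]
    return float(win.max() - win[0])

print("SCR count (5 s window):", scr_count(signal, fs))
print("Max SCL change (5 s):", round(max_scl_change(signal, fs), 3))
```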
Abstract:
Despite the important benefits for firms of commercial initiatives on the Internet, e-commerce is still an emerging distribution channel, even in developed countries. Thus, more needs to be known about the mechanisms affecting its development. A large number of works have studied firms' e-commerce adoption from technological, intraorganizational, institutional, or other specific perspectives, but there is a need for adequately tested integrative frameworks. Hence, this work proposes and tests a model of firms' business-to-consumer (B2C) e-commerce adoption that is founded on a holistic vision of the phenomenon. With this integrative approach, the authors analyze the joint influence of environmental, technological, and organizational factors; moreover, they evaluate this effect over time. Using various representative Spanish data sets covering the period 1996-2005, the findings demonstrate the suitability of the holistic framework. Likewise, some lessons are learned from the analysis of the key building blocks. In particular, the current study provides evidence for the debate about the effect of competitive pressure, since the findings show that competitive pressure disincentivizes e-commerce adoption in the long term. The results also show that the development or enrichment of consumers' consumption patterns, the technological readiness of the market forces, the firm's global scope, and its competences in innovation continuously favor e-commerce adoption.
Abstract:
In this paper we propose a method for computing JPEG quantization matrices for a given mean square error (MSE) or PSNR. We then employ our method to compute JPEG standard progressive operation mode definition scripts using a quantization approach. It is therefore no longer necessary to use a trial-and-error procedure to obtain a desired PSNR and/or definition script, which reduces cost. First, we establish a relationship between a Laplacian source and its uniform quantization error. We apply this model to the coefficients obtained in the discrete cosine transform stage of the JPEG standard; an image may then be compressed under a global MSE (or PSNR) constraint and a set of local constraints determined by the JPEG standard and visual criteria. Second, we study the JPEG standard progressive operation mode from a quantization-based approach. A relationship between the measured image quality at a given stage of the coding process and a quantization matrix is found, so the definition script construction problem can be reduced to a quantization problem. Simulations show that our method generates better quantization matrices than the classical method based on scaling the JPEG default quantization matrix. The PSNR estimate usually has an error smaller than 1 dB, and this error decreases for high PSNR values. Definition scripts may be generated while avoiding an excessive number of stages and removing small stages that do not contribute a noticeable image quality improvement during the decoding process.
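The core bookkeeping behind such a method is the closed-form link between PSNR and MSE, PSNR = 10·log10(255²/MSE) for 8-bit images, combined with the fact that the orthonormal 8x8 DCT preserves squared error, so a pixel-domain MSE budget can be converted into quantization steps for the 64 DCT coefficients. The sketch below uses the high-resolution approximation MSE ≈ Δ²/12 for a uniform quantizer and a flat, equal split of the error budget; both are simplifying assumptions and not the Laplacian-based allocation proposed in the paper.

```python
# Sketch of the PSNR-to-quantization-matrix bookkeeping described above.
# Assumptions: high-resolution model MSE ~= step^2 / 12 for a uniform quantizer,
# and an equal split of the pixel-domain error budget across the 64 DCT
# coefficients; neither is the paper's exact Laplacian-based model.
import numpy as np

def target_mse(psnr_db, peak=255.0):
    """Invert PSNR = 10 * log10(peak^2 / MSE)."""
    return peak ** 2 / 10 ** (psnr_db / 10)

def quant_matrix_for_psnr(psnr_db):
    # The 8x8 DCT is orthonormal, so the pixel-domain MSE equals the mean
    # squared quantization error of the DCT coefficients.
    mse = target_mse(psnr_db)
    per_coeff_mse = mse                       # flat split: every coefficient gets the mean
    step = np.sqrt(12.0 * per_coeff_mse)      # high-rate uniform-quantizer approximation
    Q = np.full((8, 8), step)
    return np.clip(np.round(Q), 1, 255).astype(int)

print("Target MSE for 35 dB PSNR:", round(target_mse(35.0), 2))
print(quant_matrix_for_psnr(35.0))
```

In the paper, the flat split is replaced by a Laplacian model of the DCT coefficients together with local constraints derived from the JPEG standard and visual criteria, which is why the resulting matrices outperform simple scaling of the default quantization table.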