862 results for "Integrated user model"


Relevance: 30.00%

Abstract:

J/psi photoproduction is studied in the framework of the analytic S-matrix theory. The differential and integrated elastic cross sections for J/psi photoproduction are calculated from a dual amplitude with Mandelstam analyticity. It is argued that, at low energies, the background, which is the low-energy equivalent of the high-energy diffraction, replaces the Pomeron exchange. The onset of the high-energy Pomeron dominance is estimated from the fits to the data.
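
For orientation, the two observables quoted above are related to the scattering amplitude in the standard way (a textbook normalization for a 2-to-2 process at high energy, neglecting mass corrections; the specific dual amplitude $A(s,t)$ with Mandelstam analyticity is the paper's model and is not reproduced here):

$$
\frac{d\sigma}{dt} = \frac{\left|A(s,t)\right|^{2}}{16\pi s^{2}},
\qquad
\sigma_{\mathrm{el}}(s) = \int_{t_{\min}(s)}^{t_{\max}(s)} \frac{d\sigma}{dt}\,dt,
$$

so fitting the $t$-dependence of $d\sigma/dt$ at fixed $s$ also fixes the integrated elastic cross section.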

Relevance: 30.00%

Abstract:

Exclusive J/psi electroproduction is studied in the framework of the analytic S-matrix theory. The differential and integrated elastic cross sections are calculated using the modified dual amplitude with Mandelstam analyticity model. The model is applied to the description of the available experimental data and proves to be valid in a wide region of the kinematical variables s, t, and Q². Our amplitude can also be used as a universal background parametrization for the extraction of tiny resonance signals.

Relevance: 30.00%

Abstract:

Because of the increase in workplace automation and the diversification of industrial processes, workplaces have become more and more complex. The classical approaches used to address workplace hazard concerns, such as checklists or sequence models, are therefore of limited use in such complex systems. Moreover, because of the multifaceted nature of workplaces, the use of single-oriented methods, such as AEA (man-oriented), FMEA (system-oriented), or HAZOP (process-oriented), is not satisfactory. The use of a dynamic modeling approach that allows multiple-oriented analyses may constitute an alternative to overcome this limitation. The qualitative modeling aspects of the MORM (man-machine occupational risk modeling) model are discussed in this article. The model, implemented with an object-oriented Petri net tool (CO-OPN), has been developed to simulate and analyze industrial processes from an occupational health and safety (OH&S) perspective. The industrial process is modeled as a set of interconnected subnets (state spaces) that describe its constitutive machines. Process-related factors are introduced explicitly through machine interconnections and flow properties. Man-machine interactions are modeled as triggering events for the state spaces of the machines, and the CREAM cognitive behavior model is used to establish the relevant triggering events. In the CO-OPN formalism, the model is expressed as a set of interconnected CO-OPN objects defined over data types expressing the measure attached to the flow of entities transiting through the machines. Constraints on the measures assigned to these entities are used to determine the state changes in each machine. Interconnecting machines implies the composition of such flows and consequently the interconnection of the measure constraints. This is reflected by the construction of constraint enrichment hierarchies, which can be used for simulation and analysis optimization in a clear mathematical framework. The use of Petri nets to perform multiple-oriented analysis opens perspectives in the field of industrial risk management. It may significantly reduce the duration of the assessment process. But, most of all, it opens perspectives in the fields of risk comparison and integrated risk management. Moreover, because of the generic nature of the model and tool used, the same concepts and patterns may be used to model a wide range of systems and application fields.
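
As a loose illustration of the architecture described above (interconnected machine subnets, flow entities carrying a measure, and constraints on that measure acting as triggering conditions), here is a minimal Python sketch; the machine names, thresholds and the "measure" semantics are invented for the example and this is not the MORM/CO-OPN formalism itself:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Entity:
    measure: float              # attribute attached to the flow (load, speed, ...)

@dataclass
class Machine:
    name: str
    threshold: float            # constraint on the measure for safe operation
    downstream: Optional["Machine"] = None
    state: str = "idle"

    def accept(self, e: Entity) -> None:
        # a man-machine interaction or an upstream flow is the triggering event
        self.state = "running" if e.measure <= self.threshold else "fault"
        if self.state == "running" and self.downstream is not None:
            # interconnection composes the flows, and thus their constraints
            self.downstream.accept(Entity(measure=e.measure * 1.2))

press = Machine("press", threshold=1.6)
saw = Machine("saw", threshold=1.5, downstream=press)
saw.accept(Entity(measure=1.4))   # 1.4 <= 1.5 -> saw runs
print(saw.state, press.state)     # press sees 1.68 > 1.6 -> fault to analyze
```

In the actual CO-OPN model, interconnection also composes the measure constraints into enrichment hierarchies; the sketch only hints at this through the downstream call.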

Relevance: 30.00%

Abstract:

A dynamical model based on a continuous addition of colored shot noises is presented. The resulting process is colored and non-Gaussian. A general expression for the characteristic function of the process is obtained, which, after a scaling assumption, takes on a form that is the basis of the results derived in the rest of the paper. One of these is an expansion for the cumulants, which are all finite, subject to mild conditions on the functions defining the process. This is in contrast with the Lévy distribution, which can be obtained from our model in certain limits but has no finite moments. The evaluation of the spectral density and the form of the probability density function in the tails of the distribution show that the model exhibits a power-law spectrum and long tails in a natural way. A careful analysis of the characteristic function shows that it may be separated into a part representing a Lévy process together with another part representing the deviation of our model from the Lévy process.
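
A toy numerical illustration of the mechanism described above (a sketch of the idea, not the paper's exact process): pulses arriving at Poisson times, each filtered through an exponential, i.e. colored, kernel, with heavy-tailed amplitudes; the rate, correlation time and amplitude law are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)
T, dt = 200.0, 0.01
t = np.arange(0.0, T, dt)
rate, tau = 5.0, 0.5                      # pulse rate and kernel memory time

n_pulses = rng.poisson(rate * T)
arrivals = rng.uniform(0.0, T, n_pulses)
amps = 1.0 + rng.pareto(1.5, n_pulses)    # heavy-tailed pulse heights

x = np.zeros_like(t)
for ti, ai in zip(arrivals, amps):
    m = t >= ti
    x[m] += ai * np.exp(-(t[m] - ti) / tau)   # colored (exponential) pulse

# crude looks at the power-law spectrum and the long-tailed marginal
freqs = np.fft.rfftfreq(len(t), dt)
psd = np.abs(np.fft.rfft(x - x.mean())) ** 2
print("fraction of samples above 10x median:", np.mean(x > 10 * np.median(x)))
```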

Relevance: 30.00%

Abstract:

The human body is composed of a huge number of cells acting together in a concerted manner. The current understanding is that proteins perform most of the activities necessary to keep a cell alive. The DNA, on the other hand, stores the information on how to produce the different proteins encoded in the genome. Regulating gene transcription is thus a first important step that can affect the life of a cell, modify its functions and its responses to the environment. Regulation is a complex operation that involves specialized proteins, the transcription factors. Transcription factors (TFs) can bind to DNA and activate the processes leading to the expression of genes into new proteins. Errors in this process may lead to diseases. In particular, some transcription factors have been associated with a lethal pathological state, commonly known as cancer, characterized by uncontrolled cellular proliferation, invasiveness of healthy tissues and abnormal responses to stimuli. Understanding cancer-related regulatory programs is a difficult task, often involving several TFs interacting together and influencing each other's activity. This thesis presents new computational methodologies to study gene regulation, together with applications of our methods to the understanding of cancer-related regulatory programs. The understanding of transcriptional regulation is a major challenge. We address this difficult question by combining computational approaches with large collections of heterogeneous experimental data. In detail, we design signal-processing tools to recover transcription factor binding sites on the DNA from genome-wide surveys such as chromatin immunoprecipitation assays on tiling arrays (ChIP-chip). We then use the localization of TF binding to explain expression levels of regulated genes. In this way we identify a regulatory synergy between two TFs, the oncogene C-MYC and SP1. C-MYC and SP1 bind preferentially at promoters, and when SP1 binds next to C-MYC on the DNA, the nearby gene is strongly expressed. The association between the two TFs at promoters is reflected by the conservation of their binding sites across mammals and by the permissive underlying chromatin states; it represents an important control mechanism involved in cellular proliferation, and thereby in cancer. Secondly, we identify the characteristics of the target genes of the TF estrogen receptor alpha (hERα) and study the influence of hERα in regulating transcription. hERα, upon estrogen signaling, binds to DNA to regulate transcription of its targets in concert with its co-factors. To overcome the scarcity of experimental data about the binding sites of other TFs that may interact with hERα, we conduct in silico analysis of the sequences underlying the ChIP sites, using the collection of position weight matrices (PWMs) of hERα partners, the TFs FOXA1 and SP1. We combine ChIP-chip and ChIP-paired-end-diTag (ChIP-PET) data about hERα binding on DNA with the sequence information to explain gene expression levels in a large collection of cancer tissue samples, as well as in studies of the response of cells to estrogen. We confirm that hERα binding sites are distributed over the whole genome. However, we distinguish between binding sites near promoters and binding sites along the transcripts. The first group shows weak binding of hERα and a high occurrence of SP1 motifs, in particular near estrogen-responsive genes.
The second group shows strong binding of hERα and a significant correlation between the number of binding sites along a gene and the strength of gene induction in the presence of estrogen. Some binding sites of the second group also show the presence of FOXA1, but the role of this TF still needs to be investigated. Different mechanisms have been proposed to explain hERα-mediated induction of gene expression. Our work supports the model of hERα activating gene expression from distal binding sites by interacting with promoter-bound TFs, like SP1. hERα has been associated with survival rates of breast cancer patients, though explanatory models are still incomplete: this result is important for better understanding how hERα can control gene expression. Thirdly, we address the difficult question of regulatory network inference. We tackle this problem by analyzing time series of biological measurements such as quantifications of mRNA levels or protein concentrations. Our approach uses well-established penalized linear regression models, on which we impose sparseness of the connectivity of the regulatory network. We extend this method by enforcing the coherence of the regulatory dependencies: a TF must behave coherently, as an activator or a repressor, on all its targets. This requirement is implemented as constraints on the signs of the regressed coefficients in the penalized linear regression model. Our approach is better at reconstructing meaningful biological networks than previous methods based on penalized regression. The method was tested on the DREAM2 challenge of reconstructing a five-gene/TF regulatory network, obtaining the best performance in the "undirected signed excitatory" category. Thus, these bioinformatics methods, which are reliable, interpretable and fast enough to cover large biological datasets, have enabled us to better understand gene regulation in humans.
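
A hedged sketch of the sign-coherence idea in the network-inference part (not the thesis's implementation; the toy data, penalty weight and exhaustive sign search are illustrative, and the search is only practical for a handful of TFs, as in the five-gene DREAM2 setting):

```python
import itertools
import numpy as np
from scipy.optimize import minimize

def fit_given_signs(X, Y, signs, lam):
    """Penalized least squares with column-wise sign constraints.
    X: (n, p) TF activities; Y: (n, m) target expressions;
    signs[j] = +1 forces TF j to act as an activator on all targets, -1 as a repressor."""
    n, p = X.shape
    m = Y.shape[1]

    def loss(wflat):
        W = wflat.reshape(p, m)
        r = Y - X @ W
        return 0.5 * np.sum(r ** 2) + lam * np.sum(np.abs(W))

    bounds = [(0, None) if signs[j] > 0 else (None, 0)
              for j in range(p) for _ in range(m)]   # same sign per TF row
    res = minimize(loss, np.zeros(p * m), method="L-BFGS-B", bounds=bounds)
    return res.x.reshape(p, m), res.fun

def infer(X, Y, lam=1.0):
    # exhaustive search over activator/repressor assignments (small p only)
    best = None
    for signs in itertools.product([1, -1], repeat=X.shape[1]):
        W, f = fit_given_signs(X, Y, np.array(signs), lam)
        if best is None or f < best[1]:
            best = (W, f, signs)
    return best

rng = np.random.default_rng(1)
X = rng.normal(size=(30, 3))                       # 3 TFs, 30 time points
W_true = np.array([[2.0, 1.0], [-1.5, -0.5], [0.0, 0.0]])
Y = X @ W_true + 0.1 * rng.normal(size=(30, 2))
W_hat, loss, signs = infer(X, Y, lam=0.5)
print(signs)   # expected: TF1 recovered as activator, TF2 as repressor
```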

Relevance: 30.00%

Abstract:

Introduction: Therapeutic drug monitoring (TDM) aims at optimizing treatment by individualizing the dosage regimen based on measurements of blood concentrations. Maintaining concentrations within a target range requires pharmacokinetic and clinical capabilities. Bayesian calculation represents a gold standard in the TDM approach but requires computing assistance. In recent decades, computer programs have been developed to assist clinicians in this assignment. The aim of this benchmarking was to assess and compare computer tools designed to support TDM clinical activities.

Method: A literature and Internet search was performed to identify software. All programs were tested on a common personal computer. Each program was scored against a standardized grid covering pharmacokinetic relevance, user-friendliness, computing aspects, interfacing, and storage. A weighting factor was applied to each criterion of the grid to account for its relative importance. To assess the robustness of the software, six representative clinical vignettes were also processed through each of them.

Results: Twelve software tools were identified, tested and ranked, amounting to a comprehensive review of the characteristics of available software. The number of drugs handled varies widely, and 8 programs offer the user the ability to add custom drug models. Ten programs are able to compute Bayesian dosage adaptation based on a blood concentration (a posteriori adjustment), while 9 are also able to suggest an a priori dosage regimen (prior to any blood concentration measurement) based on individual patient covariates such as age, gender and weight. Among those applying Bayesian analysis, one uses the non-parametric approach. The top two programs emerging from this benchmark are MwPharm and TCIWorks. The other programs evaluated also have good potential but are less sophisticated (e.g. in terms of storage or report generation) or less user-friendly.

Conclusion: Whereas two integrated programs are at the top of the ranked list, such complex tools may not fit all institutions, and each software tool must be weighed against the individual needs of hospitals or clinicians. Interest in computing tools to support therapeutic monitoring is still growing. Although developers have put effort into them in recent years, there is still room for improvement, especially in terms of institutional information system interfacing, user-friendliness, capacity of data storage and report generation.
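
To make the "a posteriori adjustment" concrete, here is a minimal sketch of the Bayesian (MAP) step such programs perform, on a toy one-compartment IV bolus model; the drug, priors and measurement below are invented, and this is not taken from any of the benchmarked tools:

```python
import numpy as np
from scipy.optimize import minimize

dose, t_obs, c_obs, sigma = 500.0, 12.0, 4.1, 0.5   # mg, h, mg/L (all invented)
cl_pop, v_pop = 3.0, 40.0      # population typical CL (L/h) and V (L), assumed
om_cl, om_v = 0.3, 0.2         # lognormal inter-individual SDs, assumed

def conc(cl, v, t):
    # one-compartment IV bolus: C(t) = D/V * exp(-CL/V * t)
    return dose / v * np.exp(-cl / v * t)

def neg_log_post(theta):
    cl, v = np.exp(theta)      # work in log space to keep parameters positive
    resid = (c_obs - conc(cl, v, t_obs)) / sigma
    prior = ((theta[0] - np.log(cl_pop)) / om_cl) ** 2 + \
            ((theta[1] - np.log(v_pop)) / om_v) ** 2
    return 0.5 * (resid ** 2 + prior)

fit = minimize(neg_log_post, x0=np.log([cl_pop, v_pop]))
cl_i, v_i = np.exp(fit.x)
print(f"MAP individual estimates: CL = {cl_i:.2f} L/h, V = {v_i:.1f} L")
```

A full TDM program would embed richer population models and covariates (age, gender, weight) and would propose the next dosage regimen from the individualized parameters.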

Relevance: 30.00%

Abstract:

This research was conducted in the context of the project IRIS 8A Health and Society (2002-2008) and financially supported by the University of Lausanne. It was aimed at developing a model based on elderly people's experience, and it allowed us to develop a "Portrait evaluation" of fear of falling (FOF) using their examples and words. It is a very simple evaluation that can be used by professionals, but also by elderly people themselves. The "Portrait evaluation" and the user's guide are freely accessible, but we would very much appreciate knowing whether other people or scientists have used it, and we welcome their comments (contact: Chantal.Piot-Ziegler@unil.ch). The purpose of this study is to create a model grounded in elderly people's experience, allowing the development of an original instrument to evaluate FOF. In a previous study, 58 semi-structured interviews were conducted with community-dwelling elderly people. The qualitative thematic analysis showed that fear of falling was defined through the functional, social and psychological long-term consequences of falls (Piot-Ziegler et al., 2007). In order to reveal patterns in the expression of fear of falling, an original qualitative thematic pattern analysis (QUAlitative Pattern Analysis, QUAPA) was developed and applied to these interviews. The results of this analysis show an internal coherence across the three dimensions (functional, social and psychological). Four different patterns are found, corresponding to four degrees of fear of falling; they are formalized in a fear-of-falling intensity model. This model leads to a portrait evaluation for fallers and non-fallers. The evaluation must now be confronted with large samples of elderly people living in different environments. It presents an original alternative to the concept of self-efficacy for evaluating fear of falling in older people. The model of FOF presented in this article is grounded in elderly people's experience. It gives an experiential description of the three dimensions constitutive of FOF and of their evolution as fear increases, and it defines an evaluation tool using situations and wordings based on the elderly people's discourse.

Relevance: 30.00%

Abstract:

OBJECTIVE: Standard cardiopulmonary bypass (CPB) circuits, with their large surface area and volume, contribute to postoperative systemic inflammatory reaction and hemodilution. In order to minimize these problems, a new approach has been developed, resulting in a single disposable, compact arterio-venous loop with integral kinetic-assist pumping, oxygenating, air removal, and gross filtration capabilities (CardioVention Inc., Santa Clara, CA, USA). The impact of this system on gas exchange capacity, blood elements and hemolysis is compared to that of a conventional circuit in a model of prolonged perfusion. METHODS: Twelve calves (mean body weight: 72.2+/-3.7 kg) were placed on cardiopulmonary bypass for 6 h at a flow of 5 l/min, and randomly assigned to the CardioVention system (n=6) or a standard CPB circuit (n=6). A standard battery of blood samples was taken before and throughout bypass. Analysis of variance was used for comparison. RESULTS: The hematocrit remained stable throughout the experiment in the CardioVention group, whereas it dropped in the standard group in the early phase of perfusion; when normalized for prebypass values, the two profiles differed significantly (P<0.01). Both O2 and CO2 transfer were significantly improved in the CardioVention group (P=0.04 and P<0.001, respectively). There was a slightly higher pressure drop in the CardioVention group, but no single value exceeded 112 mmHg. No hemolysis could be detected in either group, with all free plasma Hb values below 15 mg/l. The thrombocyte count, when corrected by hematocrit and normalized by prebypass values, exhibited a larger drop in the standard group (P=0.03). CONCLUSION: The CardioVention system, with its limited priming volume and exposed foreign surface area, improves gas exchange, probably because of the absence of detectable hemodilution, and appears to limit the decrease in thrombocyte count, which may be ascribed to the reduced surface area. Despite the volume and surface constraints, no hemolysis could be detected throughout the 6 h full-flow perfusion period.

Relevance: 30.00%

Abstract:

The motivation for this research originated from the abrupt rise and fall of minicomputers, which were initially used both for industrial automation and for business applications due to their significantly lower cost than their predecessors, the mainframes. Later, industrial automation developed its own vertically integrated hardware and software to address the application needs of uninterrupted operations, real-time control and resilience to harsh environmental conditions. This led to the creation of an independent industry, namely industrial automation, covering PLC, DCS, SCADA and robot control systems. This industry today employs over 200,000 people in a profitable, slow-clockspeed context, in contrast to the two mainstream computing industries: information technology (IT), focused on business applications, and telecommunications, focused on communications networks and hand-held devices. Already in the 1990s it was foreseen that IT and communications would merge into one information and communication technology (ICT) industry. The fundamental question of the thesis is: could industrial automation leverage a common technology platform with the newly formed ICT industry? Computer systems dominated by complex instruction set computers (CISC) were challenged during the 1990s by higher-performance reduced instruction set computers (RISC). RISC evolved in parallel with the constant advancement of Moore's law. These developments created the high-performance, low-energy-consumption system-on-chip (SoC) architecture. Unlike with CISC processors, RISC processor architecture is an industry separate from RISC chip manufacturing. It also has several hardware-independent software platforms, each consisting of an integrated operating system, development environment, user interface and application market, which gives customers more choice through hardware-independent, real-time-capable software applications. An architecture disruption emerged, and the smartphone and tablet markets were formed, with new rules and new key players in the ICT industry. Today there are more RISC computer systems running Linux (or other Unix variants) than any other computer system. The astonishing rise of SoC-based technologies and related software platforms in smartphones created, in unit terms, the largest installed base ever seen in the history of computers, and it is now being further extended by tablets. An underlying additional element of this transition is the increasing role of open-source technologies in both software and hardware. This has driven the microprocessor-based personal computer industry, with its few dominating closed operating-system platforms, into a steep decline. A significant factor in this process has been the separation of processor architecture from processor chip production, and the merger of operating systems and application development platforms into integrated software platforms with proprietary application markets. Furthermore, pay-per-click marketing has changed the way application development is compensated: freeware, ad-based or licensed, all at a lower price and used by a wider customer base than ever before. Moreover, the concept of a software maintenance contract is very remote in the app world. However, as a slow-clockspeed industry, industrial automation has remained intact during the disruptions based on SoC and related software platforms in the ICT industries.
Industrial automation incumbents continue to supply vertically integrated systems consisting of proprietary software and proprietary, mainly microprocessor-based, hardware. They enjoy admirable profitability levels on a very narrow customer base due to strong technology-enabled customer lock-in and customers' high risk exposure, as their production depends on fault-free operation of the industrial automation systems. When will this balance of power be disrupted? The thesis suggests how industrial automation could join the mainstream ICT industry and create an information, communication and automation technology (ICAT) industry. Lately, the Internet of Things (IoT) and weightless networks, a new standard leveraging frequency channels formerly occupied by TV broadcasting, have gradually started to change the rigid world of machine-to-machine (M2M) interaction. It is foreseeable that enough momentum will be created that the industrial automation market will in due course face an architecture disruption empowered by these new trends. This thesis examines the current state of industrial automation, subject to competition between the incumbents: first, through research on cost-competitiveness efforts in captive outsourcing of engineering, research and development; and second, through research on process re-engineering in the case of global software support for complex systems. Thirdly, we investigate the views of the industry actors, namely customers, incumbents and newcomers, on the future direction of industrial automation, and we conclude with our assessment of the possible routes industrial automation could take, taking into account the looming rise of the Internet of Things and weightless networks. Industrial automation is an industry dominated by a handful of global players, each of them focused on maintaining their own proprietary solutions. The rise of de facto standards such as the IBM PC, Unix, Linux and SoC, leveraged by IBM, Compaq, Dell, HP, ARM, Apple, Google, Samsung and others, has created the new markets of personal computers, smartphones and tablets, and it will eventually also impact industrial automation through game-changing commoditization and related control-point and business-model changes. This trend will inevitably continue, but the transition to a commoditized industrial automation will not happen in the near future.

Relevance: 30.00%

Abstract:

The interaction of tunneling with groundwater is a problem from both an environmental and an engineering point of view. Tunnel drilling may cause a drawdown of piezometric levels, and water inflows into tunnels may cause problems during excavation. While the influence of tunneling on regional groundwater systems may be adequately predicted in porous media using analytical solutions, such an approach is difficult to apply in fractured rocks. Numerical solutions are preferable, and various conceptual approaches have been proposed to describe and model groundwater flow through fractured rock masses, ranging from equivalent continuum models to discrete fracture network simulation models. However, their application requires extensive preliminary investigation of the behavior of the groundwater system based on hydrochemical and structural data. To study large-scale flow systems in fractured rocks of mountainous terrains, a comprehensive study was conducted in southern Switzerland, using as case studies two infrastructures currently under construction: (i) the Monte Ceneri base railway tunnel (Ticino) and (ii) the San Fedele highway tunnel (Roveredo, Graubünden). The approach chosen in this study combines the temporal and spatial variation of geochemical and geophysical measurements. About 60 localities, both at the surface and in the underlying tunnels, were monitored temporally and spatially for more than one year. At first, the project focused on the collection of hydrochemical and structural data. A number of springs, selected in the area surrounding the infrastructures, were monitored for discharge, electric conductivity, pH, and temperature. Water samples (springs, tunnel inflows and rains) were taken for isotopic analysis; in particular, the stable isotope composition (δ²H, δ¹⁸O values) can reflect the origin of the water, because of spatial (recharge altitude, topography, etc.) and temporal (seasonal) effects on precipitation, which in turn strongly influence the isotopic composition of groundwater. Tunnel inflows in the accessible parts of the tunnels were also sampled and, where possible, monitored over time. Noble-gas concentrations and their isotope ratios were used at selected locations to better understand the origin and circulation of the groundwater. In addition, electrical resistivity and VLF-type electromagnetic surveys were performed to identify water-bearing fractures and/or weathered areas that could be intersected at depth during tunnel construction. The main goal of this work was to demonstrate that these hydrogeological data and geophysical methods, combined with structural and hydrogeological information, can be successfully used to develop hydrogeological conceptual models of groundwater flow in regions to be exploited for tunnels. The main results of the project are: (i) to have successfully tested the application of electrical resistivity and VLF-electromagnetic surveys to assess water-bearing zones during tunnel drilling; (ii) to have verified the usefulness of noble gas, major ion and stable isotope compositions as proxies for the detection of faults and for understanding the origin of the groundwater and its flow regimes (direct rain water infiltration or groundwater of long residence time); and (iii) to have convincingly tested the combined application of a geochemical and geophysical approach to assess and predict the vulnerability of springs to tunnel drilling.
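
As one concrete example of how the stable-isotope data feed the conceptual model, a spring's mean recharge altitude can be estimated from a local δ¹⁸O-altitude gradient; the station values, gradient and spring measurement below are illustrative, not the project's data:

```python
import numpy as np

# local meteoric stations: elevation (m) vs delta-18O of precipitation (permil)
z = np.array([300.0, 800.0, 1300.0, 1800.0])
d18o = np.array([-8.0, -9.1, -10.2, -11.2])

slope, intercept = np.polyfit(z, d18o, 1)   # roughly -0.2 permil per 100 m here

spring_d18o = -9.8                           # measured at the spring (invented)
recharge_altitude = (spring_d18o - intercept) / slope
print(f"estimated mean recharge altitude: {recharge_altitude:.0f} m")
```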

Relevance: 30.00%

Abstract:

EXECUTIVE SUMMARY: Evaluating the Information Security posture of an organization is becoming a very complex task. Currently, the evaluation and assessment of Information Security are commonly performed using frameworks, methodologies and standards that often consider the various aspects of security independently. Unfortunately, this is ineffective because it does not take into consideration the necessity of having a global, systemic and multidimensional approach to Information Security evaluation. At the same time, the overall security level is generally considered to be only as strong as its weakest link. This thesis proposes a model aiming to holistically assess all dimensions of security in order to minimize the likelihood that a given threat will exploit the weakest link. A formalized structure taking into account all security elements is presented; this is based on a methodological evaluation framework in which Information Security is evaluated from a global perspective. This dissertation is divided into three parts. Part One, Information Security Evaluation Issues, consists of four chapters. Chapter 1 is an introduction to the purpose of this research and the model that will be proposed. In this chapter we raise some questions with respect to "traditional evaluation methods" and identify the principal elements to be addressed in this direction. We then introduce the baseline attributes of our model and set out the results expected from evaluations performed according to it. Chapter 2 focuses on the definition of Information Security to be used as a reference point for our evaluation model. The concepts inherent in the contents of a holistic and baseline Information Security program are defined, and on this basis the most common roots of trust in Information Security are identified. Chapter 3 analyses the difference and the relationship between the concepts of Information Risk Management and Security Management. Comparing these two concepts allows us to identify the most relevant elements to be included within our evaluation model, while clearly situating these two notions within a defined framework is of the utmost importance for the results that will be obtained from the evaluation process. Chapter 4 sets out our evaluation model and the way it addresses issues relating to the evaluation of Information Security. Within this chapter the underlying concepts of assurance and trust are discussed. Based on these two concepts, the structure of the model is developed in order to provide an assurance-related platform as well as three evaluation attributes: "assurance structure", "quality issues", and "requirements achievement". Issues relating to each of these evaluation attributes are analysed with reference to sources such as methodologies, standards and published research papers. The operation of the model is then discussed: assurance levels, quality levels and maturity levels are defined in order to perform the evaluation according to the model. Part Two, Implementation of the Information Security Assurance Assessment Model (ISAAM) According to the Information Security Domains, also consists of four chapters. This is the section where our evaluation model is put into a well-defined context with respect to the four pre-defined Information Security dimensions: the Organizational, Functional, Human, and Legal dimensions. Each Information Security dimension is discussed in a separate chapter.
For each dimension, the following two-phase evaluation path is followed. The first phase concerns the identification of the elements which will constitute the basis of the evaluation:

- identification of the key elements within the dimension;
- identification of the Focus Areas for each dimension, consisting of the security issues identified for that dimension;
- identification of the Specific Factors for each dimension, consisting of the security measures or controls addressing those security issues.

The second phase concerns the evaluation of each Information Security dimension by:

- implementing the evaluation model, based on the elements identified in the first phase, by identifying the security tasks, processes, procedures, and actions that should have been performed by the organization to reach the desired level of protection;
- building the maturity model for each dimension as a basis for reliance on security; for each dimension we propose a generic maturity model that could be used by every organization in order to define its own security requirements.

Part Three of this dissertation contains the Final Remarks, Supporting Resources and Annexes. With reference to the objectives of our thesis, the Final Remarks briefly analyse whether these objectives were achieved and suggest directions for future related research. The Supporting Resources comprise the bibliographic resources that were used to elaborate and justify our approach, and the Annexes include all the relevant topics identified within the literature to illustrate certain aspects of our approach. Our Information Security evaluation model is based on and integrates different Information Security best practices, standards, methodologies and research expertise, which can be combined in order to define a reliable categorization of Information Security. After the definition of terms and requirements, an evaluation process should be performed in order to obtain evidence that the Information Security within the organization in question is adequately managed. We have specifically integrated into our model the most useful elements of these sources of information in order to provide a generic model that can be implemented in all kinds of organizations. The value added by our evaluation model is that it is easy to implement and operate, and that it answers concrete needs in terms of reliance upon an efficient and dynamic evaluation tool through a coherent evaluation system. On that basis, our model could be implemented internally within organizations, allowing them to better govern their Information Security.
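
A loose data-structure sketch of the dimension → Focus Area → Specific Factor hierarchy described above, with a naive maturity roll-up; the names, example scores and 0-5 scale are invented for illustration and are not ISAAM's actual scoring rules:

```python
from dataclasses import dataclass

@dataclass
class SpecificFactor:          # a security measure or control
    name: str
    maturity: int              # e.g. 0 (absent) .. 5 (optimized), assumed scale

@dataclass
class FocusArea:               # a security issue within a dimension
    name: str
    factors: list

    def score(self) -> float:
        return sum(f.maturity for f in self.factors) / len(self.factors)

@dataclass
class Dimension:               # Organizational / Functional / Human / Legal
    name: str
    areas: list

    def score(self) -> float:
        return sum(a.score() for a in self.areas) / len(self.areas)

human = Dimension("Human", [
    FocusArea("awareness", [SpecificFactor("training plan", 3),
                            SpecificFactor("phishing drills", 2)]),
    FocusArea("accountability", [SpecificFactor("signed policy", 4)]),
])
print(f"{human.name} dimension maturity: {human.score():.2f} / 5")
# consistent with the weakest-link view, the weakest dimension bounds the global level
```

In the model itself, assurance, quality and maturity levels are assessed separately; the sketch only illustrates the hierarchical roll-up.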

Relevance: 30.00%

Abstract:

To support the analysis of driver behavior at rural freeway work zone lane closure merge points, Center for Transportation Research and Education staff collected traffic data at merge areas using video image processing technology. The collection of the data and the calculation of the capacity of lane closures are reported in a companion report, "Traffic Management Strategies for Merge Areas in Rural Interstate Work Zones". These data are used in the work reported in this document to calibrate a microscopic simulation model of a typical Iowa rural freeway lane closure. The model developed is a high-fidelity computer simulation with an animation interface. It simulates traffic operations at a work zone lane closure, enabling traffic engineers to visually demonstrate the delay that is likely to result when freeway reconstruction makes it necessary to close freeway lanes. Further, the model is sensitive to variations in driver behavior and is used to test the impact of slow-moving vehicles and other driver behaviors. This report consists of two parts. The first part describes the development of the work zone simulation model; the simulation analysis is calibrated and verified using data collected at a work zone on Interstate 80 in Scott County, Iowa. The second part is a user's manual for the simulation model, provided to assist users with its setup and operation. No prior computer programming skills are required to use the simulation model.
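
For intuition, the delay forecast such a model produces can be roughed out with deterministic queueing at the closure: whenever hourly demand exceeds the work-zone capacity, a queue accumulates. A minimal sketch with invented demand and capacity figures (the simulation itself models individual merging vehicles, which this back-of-envelope version does not):

```python
import numpy as np

capacity = 1400.0                       # veh/h past the closure (assumed)
demand = np.full(24, 900.0)             # hourly demand profile (invented)
demand[15:19] = [1600.0, 1800.0, 1700.0, 1500.0]   # afternoon peak

queue, max_queue, total_delay = 0.0, 0.0, 0.0
for d in demand:
    queue = max(0.0, queue + d - capacity)   # vehicles still waiting (veh)
    max_queue = max(max_queue, queue)
    total_delay += queue                     # veh-hours, end-of-hour approximation

print(f"max queue: {max_queue:.0f} veh, total delay: {total_delay:.0f} veh-h")
```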

Relevance: 30.00%

Abstract:

BACKGROUND: Colonic endoscopic submucosal dissection (ESD) is challenging as a result of the limited ability of conventional endoscopic instruments to achieve traction and exposure. The aim of this study was to evaluate the feasibility of colonic ESD in a porcine model using a novel endoscopic surgical platform, the Anubiscope (Karl Storz, Tuttlingen, Germany), equipped with two working channels for surgical instruments with four degrees of freedom offering surgical triangulation. METHODS: Nine ESDs were performed with the Anubiscope by a surgeon without any ESD experience in three swine, at 25, 15, and 10 cm above the anal verge. Sixteen ESDs were performed by an experienced endoscopist in five swine using conventional endoscopic instruments. Major ESD steps included the following for both groups: scoring the area, submucosal injection of glycerol, precut, and submucosal dissection. Outcomes measured were as follows: dissection time and speed, specimen size, en bloc dissection, and complications. RESULTS: No perforations occurred in the Anubis group, while there were eight perforations (50%) in the conventional group (p = 0.02). Complete and en bloc dissections were achieved in all cases in the Anubis group. Mean dissection time for completed cases was significantly shorter in the Anubis group (32.3 ± 16.1 vs. 55.87 ± 7.66 min; p = 0.0019). Mean specimen size was larger in the conventional group (1321 ± 230 vs. 927.77 ± 229.96 mm²; p = 0.003), but mean dissection speed was similar (35.95 ± 18.93 vs. 23.98 ± 5.02 mm²/min in the Anubis and conventional groups, respectively; p = 0.1). CONCLUSIONS: Colonic ESD was feasible in a pig model with the Anubiscope. This surgical endoscopic platform is promising for endoluminal surgical procedures such as ESD, as it is user-friendly, effective, and safe.

Relevance: 30.00%

Abstract:

We have constructed a forward modelling code in Matlab capable of handling several commonly used electrical and electromagnetic methods in a 1D environment. We review the implemented electromagnetic field equations for grounded wires and for frequency and transient soundings, and we present new solutions for the case of a non-magnetic first layer. The CR1Dmod code evaluates the Hankel transforms occurring in the field equations using either the fast Hankel transform, based on digital filter theory, or a numerical integration scheme applied between the zeros of the Bessel function. A graphical user interface allows easy construction of 1D models and control of the parameters. Modelling results are in agreement with those of other authors, but computation is less efficient than in other available codes. Nevertheless, the CR1Dmod routine handles complex resistivities and offers solutions based on the full EM equations as well as the quasi-static approximation. Thus, modelling of effects caused by changes in the magnetic permeability and the permittivity is also possible.
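
To illustrate the second of the two evaluation schemes mentioned (integration between Bessel-function zeros; the digital-filter coefficients of the fast Hankel transform are tabulated in the literature and not reproduced here), a minimal Python sketch of a J0 Hankel transform, checked against a known analytic pair; this is a sketch of the scheme, not CR1Dmod's code:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import j0, jn_zeros

def hankel0(f, r, n_intervals=100):
    """Evaluate int_0^inf f(lam) * J0(lam*r) dlam by integrating between
    consecutive zeros of J0(lam*r) and summing the pieces; truncates after
    n_intervals zeros (production codes accelerate this alternating series)."""
    zeros = jn_zeros(0, n_intervals) / r      # zeros of J0(lam*r) in lam
    breaks = np.concatenate(([0.0], zeros))
    total = 0.0
    for a, b in zip(breaks[:-1], breaks[1:]):
        piece, _ = quad(lambda lam: f(lam) * j0(lam * r), a, b)
        total += piece
    return total

# check against the analytic pair: int e^{-a*lam} J0(lam*r) dlam = 1/sqrt(a^2+r^2)
a, r = 1.5, 2.0
num = hankel0(lambda lam: np.exp(-a * lam), r)
print(num, 1.0 / np.hypot(a, r))
```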

Relevance: 30.00%

Abstract:

We present a model for analyzing the overall informational behavior of a group of individuals (students of the Universitat Oberta de Catalunya) who have a positive perception of the use of information and communication technologies (ICT) and who make intensive use of them. Starting from a qualitative approach, based on 24 interviews and a subsequent content analysis, four distinct personal information management profiles are identified (reactive, passive, exhaustive and proactive) on the basis of ten underlying variables (information access, management and use; information literacy; cognitive profile; attitude; perception of ICT; and the academic, professional and everyday-life domains), and the differences in informational behavior depending on the domain in question are highlighted. Profile identification is a basic stage of user-centered design that facilitates interventions specific to each type of user, respecting their requirements in terms of tools and processes so that they can carry out their informational behavior efficiently and effectively.