935 results for Computer systems organization: general-emerging technologies


Relevance:

100.00%

Publisher:

Abstract:

This thesis develops a reference framework for the combined use of two impact assessment methodologies, LCA and RA, for emerging technologies. The originality of the study lies in having proposed and also applied the framework to a case study, specifically an innovative cooling technology based on nanofluids (NF) developed by partners of the European project Nanohex, who contributed to the studies, above all to the inventory of the required data. The complexity of the study lies both in the difficult integration of two methodologies conceived for different purposes and structured to serve those purposes, and in the application sector, which, although rapidly expanding, suffers from serious information gaps regarding production processes and the behaviour of the substances involved. The framework was applied to the production of alumina nanofluid (NF) via two production routes (single-stage and two-stage) in order to assess and compare the impacts on human health and the environment. It should be noted that the LCA was quantitative but did not consider the impacts of the nanomaterials (NM) in the toxicity categories. As for the RA, a qualitative study was carried out, owing to the above-mentioned lack of toxicological and exposure parameters, with workers as its focus; it was therefore assumed that releases into the environment during the production phase are negligible. The qualitative RA was performed with a dedicated software tool, Stoffenmanager Nano, which makes it possible to prioritise the risks associated with inhalation in the workplace. The framework provides a procedure organised into four phases: TECHNOLOGICAL SYSTEM DEFINITION, DATA COLLECTION, RISK ASSESSMENT AND IMPACT QUANTIFICATION, INTERPRETATION.

Relevance:

100.00%

Publisher:

Abstract:

Digital evidence requires the same precautions as any other scientific investigation. An overview is given of the methodological and practical aspects of digital forensics in light of the recent ISO/IEC 27037:2012 standard on the handling of digital exhibits during the identification, collection, acquisition and preservation of digital data. These methodologies scrupulously comply with the integrity and authenticity requirements set out by the legislation on digital forensics, in particular Law 48/2008 ratifying the Budapest Convention on Cybercrime. With regard to the offence of child pornography, a review of EU and national legislation is provided, with emphasis on the aspects relevant to forensic analysis. Since file sharing over peer-to-peer networks is the channel on which most of the exchange of illicit material is concentrated, an overview of the most widespread protocols and systems is given, with emphasis on the eDonkey network and the eMule software, both widely used among Italian users. The problems encountered in investigating and suppressing the phenomenon, which fall to the police forces, are briefly discussed before focusing on the main contribution of the work, the forensic analysis of computer systems seized from persons under investigation (or indicted) for child pornography offences: the design and implementation of eMuleForensic makes it possible to analyse, quickly and with great precision, the events that occur when the eMule file-sharing software is used; the software is available both online at http://www.emuleforensic.com and as a tool within the DEFT forensic distribution. Finally, an operational protocol is proposed for the forensic analysis of computer systems involved in child pornography investigations.
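
A minimal sketch, not part of the thesis or of eMuleForensic, of how the integrity requirement of ISO/IEC 27037:2012 is commonly documented in practice: a cryptographic hash of the acquired image is recorded at acquisition time and re-verified before analysis. The file path is illustrative only.

    import hashlib

    def sha256_of(path, chunk_size=1 << 20):
        """Return the SHA-256 digest of a file, reading it in chunks."""
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(chunk_size), b""):
                h.update(chunk)
        return h.hexdigest()

    # Hash recorded at acquisition time and re-checked before analysis.
    acquisition_hash = sha256_of("evidence/disk_image.dd")   # illustrative path
    assert sha256_of("evidence/disk_image.dd") == acquisition_hash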

Relevance:

100.00%

Publisher:

Abstract:

In many areas of industrial manufacturing, such as the automotive industry, digital mock-ups are used so that the development of complex machines can be supported by computer systems as effectively as possible. Motion planning algorithms play an important role here in guaranteeing that these digital prototypes can be assembled without collisions. In recent decades, sampling-based methods have proven particularly successful for this task. They generate a large number of random placements for the object to be installed or removed and use a collision detection mechanism to check each placement for validity. Collision detection therefore plays a central role in the design of efficient motion planning algorithms. A difficulty for this class of planners are so-called "narrow passages", which occur wherever the freedom of movement of the objects to be planned is strongly restricted. In such regions it can be hard to find a sufficient number of collision-free samples, and more sophisticated techniques may then be required to achieve good performance of the algorithms.

This thesis consists of two parts. In the first part we investigate parallel collision detection algorithms. Since we target an application in sampling-based motion planners, we consider a setting in which the same two objects are tested for collision in a large number of different relative placements. We implement and compare several methods that use bounding volume hierarchies (BVHs) and hierarchical grids as acceleration structures. All of the described methods were parallelised across multiple CPU cores. In addition, we compare different CUDA kernels for performing BVH-based collision tests on the GPU. Besides different ways of distributing the work among the parallel GPU threads, we investigate the effect of different memory access patterns on the performance of the resulting algorithms. We further present a number of approximate collision tests based on the described methods; if a lower accuracy of the tests is tolerable, this yields a further performance improvement.

In the second part of the thesis we describe a parallel, sampling-based motion planner of our own design for highly complex problems with several "narrow passages". The method works in two phases. The basic idea is to conceptually allow small errors in the first planning phase in order to increase planning efficiency, and then to repair the resulting path in a second phase. The planner used in phase I is based on so-called Expansive Space Trees. In addition, we equipped the planner with a push-out operation that allows small collisions to be resolved, increasing efficiency in regions with restricted freedom of movement. Optionally, our implementation allows the use of approximate collision tests; this further reduces the accuracy of the first planning phase but also yields a further performance gain. The motion paths resulting from phase I may then not be completely collision-free. To repair these paths, we designed a novel planning algorithm that plans a new, collision-free motion path locally, restricted to a small neighbourhood around the existing path.

We tested the described algorithm on a class of new, difficult metal puzzles, some of which contain several "narrow passages". To the best of our knowledge, a collection of comparably complex benchmarks is not publicly available, and we also found no description of comparably complex benchmarks in the motion planning literature.
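
As an illustration of the collision-heavy workload that sampling-based planners generate (many random placements of the same two objects, each checked for validity), here is a minimal Python sketch that tests a batch of sampled poses with a user-supplied collision checker on several CPU cores. The pose representation and the checker are placeholders, not the thesis's BVH or grid methods.

    from multiprocessing import Pool
    import random

    def sample_pose():
        # Placeholder: a pose as (x, y, z, yaw); a real planner samples in SE(3).
        return tuple(random.uniform(-1.0, 1.0) for _ in range(4))

    def is_collision_free(pose):
        # Placeholder for a BVH- or grid-based collision test between the
        # moving part and the static assembly.
        x, y, z, yaw = pose
        return (x * x + y * y + z * z) > 0.25   # toy "obstacle" around the origin

    if __name__ == "__main__":
        poses = [sample_pose() for _ in range(10000)]
        with Pool() as pool:                     # parallel collision checks
            results = pool.map(is_collision_free, poses)
        valid = [p for p, ok in zip(poses, results) if ok]
        print(f"{len(valid)} of {len(poses)} sampled poses are collision-free")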

Relevance:

100.00%

Publisher:

Abstract:

Macromolecular drug delivery systems are of great interest for the clinical application of chemotherapeutic agents. To investigate their clinical potential, it is of particular importance to determine their pharmacokinetic profile in vivo. Every change in polymer structure influences the body distribution of the corresponding macromolecule. Detailed knowledge of structure-property relationships in the living organism is therefore needed to tune a nanocarrier system for future applications. In this respect, preclinical screening by radioactive labelling and positron emission tomography is a useful method for the rapid and quantitative monitoring of drug carrier candidates. Poly(HPMA) and PEG in particular are widely used in the field of polymer-based therapeutics, and structures derived from them could provide new generations in this area of research.

The present work describes the successful synthesis of various HPMA- and PEG-based polymer architectures (homopolymers, random copolymers and block copolymers) carried out by RAFT polymerisation and reactive ester chemistry. These polymers were then radiolabelled with fluorine-18 and iodine-131 and evaluated biologically by microPET and ex vivo biodistribution studies in tumour-bearing rats. Varying the polymer architecture and analysing the resulting polymers in vivo led to important conclusions. The hydrophilic/lipophilic balance had a significant influence on the pharmacokinetic profile, with the best in vivo properties (low uptake in liver and spleen and prolonged blood circulation time) observed for random HPMA-LMA copolymers with increasing hydrophobic content. In addition, long-term studies with iodine-131 showed an enhanced retention of high-molecular-weight, HPMA-based random copolymers in tumour tissue, confirming the well-known EPR effect. Moreover, superstructure formation, and hence polymer size, is a key factor for efficient tumour targeting, since polymer structures larger than 200 nm in diameter are rapidly recognised by the MPS and eliminated from the bloodstream. For this reason, the HPMA block copolymers synthesised here were chemically modified with PEG side chains to induce a reduction in size as well as a reduction in blood clearance. This approach led to increased tumour accumulation in the Walker 256 carcinoma model. In general, the body distribution of HPMA- and PEG-based polymers is strongly influenced by the polymer architecture and the molecular weight, and their efficiency in tumour treatment depends markedly on the individual characteristics of the particular tumour. Based on these observations, the present dissertation emphasises the need for detailed polymer characterisation, combined with preclinical screening, in order to tailor polymeric drug delivery systems for individualised patient therapy in the future.

Relevance:

100.00%

Publisher:

Abstract:

Java Enterprise Applications (JEAs) are large systems that integrate multiple technologies and programming languages. Transactions in JEAs simplify the development of code that deals with failure recovery and multi-user coordination by guaranteeing atomicity of sets of operations. The heterogeneous nature of JEAs, however, can obfuscate conceptual errors in the application code, and in particular can hide incorrect declarations of transaction scope. In this paper we present a technique to expose and analyze the application transaction scope in JEAs by merging and analyzing information from multiple sources. We also present several novel visualizations that aid in the analysis of transaction scope by highlighting anomalies in the specification of transactions and violations of architectural constraints. We have validated our approach on two versions of a large commercial case study.
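
The paper's specific merging technique is not reproduced here; the following hypothetical Python sketch only illustrates the general idea of combining declared transaction attributes with an observed call graph and flagging one simple anomaly (a method that writes to the database but is never reached from a transactional entry point). All names and attributes are invented.

    # Declared transaction attributes (e.g., from descriptors or annotations).
    declared = {
        "OrderService.place": "REQUIRED",
        "OrderDao.insert": "SUPPORTS",
        "ReportJob.run": None,           # no transaction attribute declared
        "StatsDao.update": None,
    }

    # Observed call graph, e.g., recovered from static analysis or traces.
    calls = {
        "OrderService.place": ["OrderDao.insert"],
        "ReportJob.run": ["StatsDao.update"],
        "OrderDao.insert": [],
        "StatsDao.update": [],
    }

    writes_to_db = {"OrderDao.insert", "StatsDao.update"}

    def reachable(start, graph):
        seen, stack = set(), [start]
        while stack:
            node = stack.pop()
            if node not in seen:
                seen.add(node)
                stack.extend(graph.get(node, []))
        return seen

    roots = [m for m, attr in declared.items() if attr == "REQUIRED"]
    covered = set().union(*(reachable(r, calls) for r in roots)) if roots else set()

    for method in sorted(writes_to_db - covered):
        print(f"possible anomaly: {method} writes to the database outside any transaction scope")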

Relevance:

100.00%

Publisher:

Abstract:

Java Enterprise Applications (JEAs) are large systems that integrate multiple technologies and programming languages. To support the analysis of JEAs we have developed MooseJEE, an extension of the Moose environment capable of modeling the typical elements of JEAs.

Relevance:

100.00%

Publisher:

Abstract:

Java Enterprise Applications (JEAs) are complex software systems written using multiple technologies. Moreover, they are usually distributed systems and use a database to deal with persistence. A particular problem that appears in the design of these systems is the lack of a rich business model. In this paper we propose a technique to support the recovery of such rich business objects starting from anemic Data Transfer Objects (DTOs). By exposing the code duplication in the application elements that use the DTOs, we suggest which business logic can be moved from the other classes into the DTOs.
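
A toy Python sketch of the duplication-driven heuristic described above; it is not the paper's implementation. It groups hypothetical code fragments by the DTO they read and, when the same fragment appears in several classes, suggests moving that logic into the DTO.

    from collections import defaultdict

    # Hypothetical fragments: (owning class, DTO it reads, normalized fragment body).
    fragments = [
        ("OrderFacade",  "OrderDTO", "total = sum(i.price * i.qty for i in dto.items)"),
        ("InvoiceBean",  "OrderDTO", "total = sum(i.price * i.qty for i in dto.items)"),
        ("ReportHelper", "OrderDTO", "total = sum(i.price * i.qty for i in dto.items)"),
        ("OrderFacade",  "OrderDTO", "dto.status = 'OPEN'"),
    ]

    by_body = defaultdict(set)
    for owner, dto, body in fragments:
        by_body[(dto, body)].add(owner)

    for (dto, body), owners in by_body.items():
        if len(owners) > 1:   # duplicated across classes -> candidate business logic
            print(f"consider moving into {dto}: {body!r} (duplicated in {sorted(owners)})")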

Relevance:

100.00%

Publisher:

Abstract:

Extensive research conducted over the past several decades has indicated that semipermeable membrane behavior (i.e., the ability of a porous medium to restrict the passage of solutes) may have a significant influence on solute migration through a wide variety of clay-rich soils, including both natural clay formations (aquitards, aquicludes) and engineered clay barriers (e.g., landfill liners and vertical cutoff walls). Restricted solute migration through clay membranes generally has been described using coupled flux formulations based on nonequilibrium (irreversible) thermodynamics. However, these formulations have differed depending on the assumptions inherent in the theoretical development, resulting in some confusion regarding the applicability of the formulations. Accordingly, a critical review of coupled flux formulations for liquid, current, and solutes through a semipermeable clay membrane under isothermal conditions is undertaken with the goals of explicitly resolving differences among the formulations and illustrating the significance of the differences from theoretical and practical perspectives. Formulations based on single-solute systems (i.e., uncharged solute), single-salt systems, and general systems containing multiple cations or anions are presented. Also, expressions relating the phenomenological coefficients in the coupled flux equations to relevant soil properties (e.g., hydraulic conductivity and effective diffusion coefficient) are summarized for each system. A major difference in the formulations is shown to exist depending on whether counter diffusion or salt diffusion is assumed. This difference between counter and salt diffusion is shown to affect the interpretation of values for the effective diffusion coefficient in a clay membrane based on previously published experimental data. Solute transport theories based on both counter and salt diffusion then are used to re-evaluate previously published column test data for the same clay membrane. The results indicate that, despite the theoretical inconsistency between the counter-diffusion assumption and the salt-diffusion conditions of the experiments, the predictive ability of solute transport theory based on the assumption of counter diffusion is not significantly different from that based on the assumption of salt diffusion, provided that the input parameters used in each theory are derived under the same assumption inherent in the theory. Nonetheless, salt-diffusion theory is fundamentally correct and, therefore, is more appropriate for problems involving salt diffusion in clay membranes. Finally, the fact that solute diffusion cannot occur in an ideal or perfect membrane is not explicitly captured in any of the theoretical expressions for total solute flux in clay membranes, but rather is generally accounted for via inclusion of an effective porosity, n_e, or a restrictive tortuosity factor, τ_r, in the formulation of Fick's first law for diffusion. Both n_e and τ_r have been correlated as a linear function of membrane efficiency. This linear correlation is supported theoretically by pore-scale modeling of solid-liquid interactions, but experimental support is limited. Additional data are needed to bolster the validity of the linear correlation for clay membranes.
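
As a hedged illustration of the last point, Fick's first law is often written with either the effective porosity n_e or the restrictive tortuosity factor τ_r; the exact coefficient definitions differ between the formulations reviewed, and the specific linear forms shown below (with ω denoting membrane efficiency) are one version reported in the literature rather than taken from this abstract:

    J_d = -\, n_e \, D^{*} \, \frac{\partial C}{\partial x}
    \quad\text{or}\quad
    J_d = -\, n \, \tau_r \, D_0 \, \frac{\partial C}{\partial x},
    \qquad n_e \approx n\,(1-\omega), \qquad \tau_r \approx 1-\omega,

where J_d is the diffusive solute flux, n the porosity, D* the effective diffusion coefficient, D_0 the free-solution diffusion coefficient, C the solute concentration and x distance. Under such a linear correlation the diffusive flux vanishes as ω approaches 1, which is how the non-diffusive behavior of an ideal membrane is accounted for.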

Relevance:

100.00%

Publisher:

Abstract:

The performance of the parallel vector implementation of the one- and two-dimensional orthogonal transforms is evaluated. The orthogonal transforms are computed using actual or modified fast Fourier transform (FFT) kernels. The factors considered in comparing the speed-up of these vectorized digital signal processing algorithms are discussed, and it is shown that the traditional way of comparing the execution speed of digital signal processing algorithms by the ratios of the number of multiplications and additions is no longer effective for vector implementation; the structure of the algorithm must also be considered as a factor when comparing the execution speed of vectorized digital signal processing algorithms. Simulation results on the Cray X/MP with the following orthogonal transforms are presented: discrete Fourier transform (DFT), discrete cosine transform (DCT), discrete sine transform (DST), discrete Hartley transform (DHT), discrete Walsh transform (DWHT), and discrete Hadamard transform (DHDT). A comparison between the DHT and the fast Hartley transform is also included.
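
To make the phrase "computed using actual or modified FFT kernels" concrete, here is a small NumPy sketch (not the paper's Cray implementation) that obtains the discrete Hartley transform from a standard FFT kernel via the identity DHT(x) = Re(FFT(x)) - Im(FFT(x)), checked against the direct O(N^2) definition.

    import numpy as np

    def dht(x):
        """Discrete Hartley transform computed from an FFT kernel."""
        X = np.fft.fft(x)
        return X.real - X.imag        # cas(t) = cos(t) + sin(t)

    x = np.random.rand(8)
    n = np.arange(len(x))
    # Direct definition: H[k] = sum_n x[n] * cas(2*pi*n*k/N).
    angles = 2 * np.pi * np.outer(n, n) / len(x)
    cas = np.cos(angles) + np.sin(angles)
    assert np.allclose(dht(x), cas @ x)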

Relevance:

100.00%

Publisher:

Abstract:

This is a link to a handful of essays I have penned for the Institute for Ethics and Emerging Technologies on the subjects of personhood and transhumanism/H+.

Relevance:

100.00%

Publisher:

Abstract:

File system security is fundamental to the security of UNIX and Linux systems, since in these systems almost everything is in the form of a file. To protect the system files and other sensitive user files from unauthorized accesses, certain security schemes are chosen and used by different organizations in their computer systems. A file system security model provides a formal description of a protection system. Each security model is associated with specified security policies which focus on one or more of the security principles: confidentiality, integrity and availability. The security policy is not only about "who" can access an object, but also about "how" a subject can access an object. To enforce the security policies, each access request is checked against the specified policies to decide whether it is allowed or rejected. The current protection schemes in UNIX/Linux systems focus on access control. Besides the basic access control scheme of the system itself, which includes permission bits, the setuid and seteuid mechanisms and the root account, there are other protection models, such as Capabilities, Domain Type Enforcement (DTE) and Role-Based Access Control (RBAC), supported and used in certain organizations. These models protect the confidentiality of the data directly. The integrity of the data is protected indirectly by only allowing trusted users to operate on the objects. The access control decisions of these models depend on either the identity of the user or the attributes of the process the user can execute, and on the attributes of the objects. Adoption of these sophisticated models has been slow; this is likely due to the enormous complexity of specifying controls over a large file system and the need for system administrators to learn a new paradigm for file protection. We propose a new security model: the file system firewall. It adapts the familiar network firewall protection model, used to control the data that flows between networked computers, to file system protection. This model can support access control decisions based on any system-generated attributes of the access requests, e.g., time of day. The access control decisions are not based on a single entity, such as the account in traditional discretionary access control or the domain name in DTE. In the file system firewall, access decisions are made upon situations involving multiple entities. A situation is programmable with predicates on the attributes of the subject, the object and the system. The file system firewall specifies the appropriate actions for these situations. We implemented a prototype of the file system firewall on SUSE Linux. Preliminary results of performance tests on the prototype indicate that the runtime overhead is acceptable. We compared the file system firewall with TE in SELinux to show that the firewall model can accommodate many other access control models. Finally, we show the ease of use of the firewall model. When the firewall system is restricted to a specified part of the system, all the other resources are unaffected, which enables a relatively smooth adoption. This fact, and the fact that it is a familiar model to system administrators, will facilitate adoption and correct use. The user study we conducted on traditional UNIX access control, SELinux and the file system firewall confirmed this: beginner users found it easier to use and faster to learn than the traditional UNIX access control scheme and SELinux.
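
A minimal, hypothetical Python sketch of the "situation" idea described above: an access decision evaluated from predicates over attributes of the subject, the object and the system (here, time of day). It illustrates the model only and is not the SUSE Linux prototype.

    from datetime import datetime

    # A situation is a predicate over (subject, object, system) attributes;
    # a rule pairs a situation with an action.
    rules = [
        (lambda s, o, sys: o["path"].startswith("/etc/") and s["uid"] != 0, "deny"),
        (lambda s, o, sys: not (9 <= sys["hour"] < 17) and o["label"] == "payroll", "deny"),
        (lambda s, o, sys: True, "allow"),        # default action
    ]

    def decide(subject, obj, system):
        for situation, action in rules:
            if situation(subject, obj, system):
                return action

    request = (
        {"uid": 1000, "gid": 100},                               # subject attributes
        {"path": "/home/alice/report.txt", "label": "payroll"},  # object attributes
        {"hour": datetime.now().hour},                           # system-generated attributes
    )
    print(decide(*request))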

Relevance:

100.00%

Publisher:

Abstract:

Java Enterprise Applications (JEAs) are complex systems composed using various technologies that in turn rely on languages other than Java, such as XML or SQL. Given the complexity of these applications, the need to reverse engineer them in order to support further development becomes critical. In this paper we show how it is possible to split a system into layers and how to interpret the distance between application elements in order to support the refactoring of JEAs. The purpose of this paper is to explore ways to provide suggestions about the refactoring operations to perform on the code by evaluating the distance between layers and the elements belonging to those layers. We split JEAs into layers by considering the kinds and the purposes of the elements composing the application. We measure the distance between elements using the notion of the shortest path in a graph. We also present how to enrich the interpretation of the distance value with enterprise pattern detection in order to refine the suggestions about modifications to perform on the code.
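
A small Python sketch of the distance notion mentioned above: elements are nodes of a dependency graph, and the distance between two elements (or between the layers containing them) is the length of the shortest path. The graph and element names are invented for illustration, not taken from the paper.

    from collections import deque

    # Hypothetical dependency graph between application elements.
    graph = {
        "OrderServlet": ["OrderService"],
        "OrderService": ["OrderDao", "MailGateway"],
        "OrderDao": ["OrderTable"],
        "MailGateway": [],
        "OrderTable": [],
    }

    def distance(src, dst):
        """Shortest-path length between two elements (None if unreachable)."""
        queue, seen = deque([(src, 0)]), {src}
        while queue:
            node, d = queue.popleft()
            if node == dst:
                return d
            for nxt in graph.get(node, []):
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, d + 1))
        return None

    # A presentation-layer element that is unexpectedly close to the data layer
    # may indicate a layering violation worth refactoring.
    print(distance("OrderServlet", "OrderTable"))   # -> 3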

Relevance:

100.00%

Publisher:

Abstract:

This paper introduces a novel vision for further enhanced Internet of Things services. Based on a variety of data (such as location data, ontology-backed search queries, and indoor and outdoor conditions), the Prometheus framework is intended to support users with helpful recommendations and information preceding a search for context-aware data. Adapting concepts from artificial intelligence, Prometheus proposes answers readjusted to the user under a wide range of conditions. A number of potential Prometheus framework applications are illustrated. Added value and possible future studies are discussed in the conclusion.

Relevance:

100.00%

Publisher:

Abstract:

A web service is a collection of industry standards that enable reusability of services and interoperability of heterogeneous applications. The UMLS Knowledge Source (UMLSKS) Server provides remote access to the UMLSKS and related resources. We propose a Web Services Architecture that encapsulates the UMLSKS-API and makes it available in distributed and heterogeneous environments. This is a first step towards intelligent and automatic discovery and invocation of UMLS services by computer systems in distributed environments such as the web.
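
A hedged sketch of the wrapping idea, using Flask and invented endpoint and function names rather than the actual UMLSKS-API: the web service exposes a standard HTTP interface and delegates each request to the underlying knowledge-source client, so heterogeneous applications only need to speak HTTP and JSON.

    from flask import Flask, jsonify

    app = Flask(__name__)

    def lookup_concept(cui):
        # Placeholder for a call into the UMLSKS client library; the real
        # UMLSKS-API methods and their signatures are not reproduced here.
        return {"cui": cui, "name": "example concept"}

    @app.route("/concepts/<cui>")
    def get_concept(cui):
        return jsonify(lookup_concept(cui))

    if __name__ == "__main__":
        app.run(port=8080)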