281 results for Reusable Passwords
Abstract:
Quick video for iSolutions to sanity-check a workaround, as all staff will be asked to change network passwords, which could have a major effect on staff authenticating to network printers from a Mac. If suitable, it can be used by Serviceline. Do not contact Adam Procter about this.
Abstract:
Passwords are the most common form of authentication, and most of us have to log in every day to several accounts which require passwords. Unfortunately, passwords often do a poor job of proving who we are, and come with a host of usability problems. Probably the only reason passwords still exist is that there often isn't a better alternative, so we are likely to be stuck with them for the foreseeable future. Password cracking has been a problem for years, and becomes more problematic as computers become more powerful and attackers get a better idea of the sort of passwords people use. This presentation will look at two free password cracking tools, Hashcat and John the Ripper, and how even a non-expert on a laptop (i.e. me) can use them effectively. An introduction to some of the research surrounding the economics and usability of passwords will also be given. Note that the speaker is not an expert in this area, so it will be fairly informal, since I'm sure you're all tired after a long term.
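As a toy illustration of what such tools automate (this is not Hashcat or John the Ripper code, just a minimal Python sketch of a dictionary attack, with one mangling rule, against a hypothetical unsalted MD5 hash):

```python
import hashlib

def md5_hex(password):
    """Hash a candidate the way an unsalted MD5 password store would."""
    return hashlib.md5(password.encode()).hexdigest()

def dictionary_attack(target_hash, wordlist):
    """Try each word, plus a simple 'append a digit' mangling rule
    (real cracking tools apply thousands of such rules), against the hash."""
    for word in wordlist:
        for candidate in [word] + [word + str(d) for d in range(10)]:
            if md5_hex(candidate) == target_hash:
                return candidate
    return None

# A hypothetical leaked, unsalted MD5 hash of the password "sunshine7".
leaked = md5_hex("sunshine7")
print(dictionary_attack(leaked, ["password", "letmein", "sunshine"]))  # sunshine7
```

Real tools are vastly faster (GPU-accelerated, with optimized rule engines), but the principle — guess, mangle, hash, compare — is the same.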
Abstract:
This work focuses on analyzing the activities surrounding the printing-process services offered by the organization DATAPOINT de Colombia SAS, in order to identify the critical points in the management of printing waste and the decisions taken by those involved throughout the process (suppliers, customers and the company itself), with the aim of reviewing measures and strategies to strengthen the integrated management of printing waste, based on a review and comparison of the best practices proposed by actors in the sector. Recommendations were also made, with improvement actions that could be carried out to mitigate the environmental impact generated by this waste. To that end, a study of the organization, its customers and its suppliers was first conducted in order to gain an integral understanding of the value chain around toner cartridges and their reverse logistics, as well as of the regulatory environment at both the national and international level. Next, the points for improvement were identified by comparing what the supplier proposes with what those involved in the process actually do; this work was carried out in the field with customers, to understand the current situation, their needs, and what they base their decisions about printing-waste handling on. Finally, a series of improvement actions and recommendations is listed which can be incorporated into DATAPOINT's critical processes.
Abstract:
The article is included in a special monographic issue containing the papers from the I Simposio Pluridisciplinar sobre Diseño, Evaluación y Descripción de Contenidos Educativos Reutilizables (Guadalajara, October 2004). Abstract based on that of the publication.
Abstract:
The evolvability of a software artifact is its capacity for producing heritable or reusable variants; the inverse quality is the artifact's inertia or resistance to evolutionary change. Evolvability in software systems may arise from engineering and/or self-organising processes. We describe our 'Conditional Growth' simulation model of software evolution and show how it can be used to investigate evolvability from a self-organisation perspective. The model is derived from the Bak-Sneppen family of 'self-organised criticality' simulations. It shows good qualitative agreement with Lehman's 'laws of software evolution' and reproduces phenomena that have been observed empirically. The model suggests interesting predictions about the dynamics of evolvability and implies that much of the observed variability in software evolution can be accounted for by comparatively simple self-organising processes.
Abstract:
We describe a simple, inexpensive, but remarkably versatile and controlled growth environment for the observation of plant germination and seedling root growth on a flat, horizontal surface over periods of weeks. The setup provides each plant with controlled humidity (between 56% and 91% RH) and contact with both nutrients and atmosphere. The flat, horizontal geometry of the surface supporting the roots eliminates the gravitropic bias on their development and facilitates the imaging of the entire root system. Experiments can be set up under sterile conditions and then transferred to a non-sterile environment. The system can be assembled in 1-2 minutes, costs approximately $8.78 per plant, is almost entirely reusable ($0.43 per experiment in disposables), and is easily scalable to a variety of plants. We demonstrate the performance of the system by germinating, growing, and imaging wheat (Triticum aestivum), corn (Zea mays), and Wisconsin Fast Plants (Brassica rapa). Germination rates were close to those expected under optimal conditions.
Abstract:
Reusable and evolvable Software Engineering Environments (SEEs) are essential to software production and have increasingly become a need. From another perspective, software architectures and reference architectures have played a significant role in determining the success of software systems. In this paper we present a reference architecture for SEEs, named RefASSET, which is based on concepts coming from the aspect-oriented approach. This architecture is specialized to the software testing domain, and the development of tools for that domain is discussed. This and other case studies have pointed out that the use of aspects in RefASSET provides a better separation of concerns, resulting in reusable and evolvable SEEs. (C) 2011 Elsevier Inc. All rights reserved.
Abstract:
We here report the first magnetically recoverable Rh(0) nanoparticle-supported catalyst with extraordinary recovery and recycling properties. Magnetic separation has been suggested as a very promising technique to improve recovery of metal-based catalysts in liquid-phase batch reactions. The separation method is remarkably simple, as it requires no filtration, decantation, centrifugation, or any other separation technique, thereby overcoming traditionally time- and solvent-consuming procedures. Our new magnetically separable catalytic system, comprised of Rh nanoparticles immobilized on silica-coated magnetite nanoparticles, is highly active and could be reused up to 20 times for hydrogenation of cyclohexene (180,000 mol/mol(Rh)) and benzene (11,550 mol/mol(Rh)) under mild conditions. (c) 2007 Elsevier B.V. All rights reserved.
Abstract:
We here report the synthesis, characterization and catalytic performance of new supported Ru(III) and Ru(0) catalysts. In contrast to most supported catalysts, these newly developed catalysts for oxidation and hydrogenation reactions were prepared using nearly the same synthetic strategy, and are easily recovered by magnetic separation from liquid-phase reactions. The catalysts were found to be active in both forms, Ru(III) and Ru(0), for selective oxidation of alcohols and hydrogenation of olefins, respectively. The catalysts operate under mild conditions to activate molecular oxygen or molecular hydrogen to perform clean conversion of selected substrates. Aryl and alkyl alcohols were converted to aldehydes under mild conditions, with negligible metal leaching. If the metal is properly reduced, Ru(0) nanoparticles immobilized on the magnetic support surface are obtained, and the catalyst becomes active for hydrogenation reactions. (c) 2009 Elsevier B.V. All rights reserved.
Abstract:
We describe the development of a label-free method to analyze the interactions between Ca(2+) and the porcine S100A12 protein immobilized on polyvinyl butyral (PVB). The modified gold electrodes were characterized using cyclic voltammetry (CV), electrochemical impedance spectroscopy (EIS), scanning electron microscopy (SEM) and surface plasmon resonance (SPR) techniques. SEM analyses of PVB and PVB-S100A12 showed a heterogeneous distribution of PVB spherules on the gold surface. EIS and CV measurements showed that redox probe reactions on the modified gold electrodes were partially blocked due to the adsorption of PVB-S100A12, and confirm the existence of a positive response of the immobilized S100A12 to the presence of calcium ions. The biosensor exhibited a wide linear response to Ca(2+) concentrations ranging from 12.5 to 200 mM. The PVB-S100A12 seems to be bound to the gold electrode surface by physical adsorption: we observed an increase of 1184.32 millidegrees in the SPR angle after the adsorption of the protein on the PVB surface (an indication that 9.84 ng of S100A12 are adsorbed per mm(2) of the Au-PVB electrode), followed by a further increase of 581.66 millidegrees after attachment of the Ca(2+) ions. In addition, no SPR response is obtained for non-specific ions. These studies might be useful as a platform for the design of new reusable and sensitive biosensing devices that could find use in clinical applications. (C) 2010 Elsevier B.V. All rights reserved.
Abstract:
Developing successful navigation and mapping strategies is an essential part of autonomous robot research. However, hardware limitations often make for inaccurate systems. This project serves to investigate efficient alternatives to mapping an environment, by first creating a mobile robot, and then applying machine learning to the robot and controlling systems to increase the robustness of the robot system. My mapping system consists of a semi-autonomous robot drone in communication with a stationary Linux computer system. There are learning systems running on both the robot and the more powerful Linux system. The first stage of this project was devoted to designing and building an inexpensive robot. Utilizing my prior experience from independent studies in robotics, I designed a small mobile robot that was well suited for simple navigation and mapping research. When the major components of the robot base were designed, I began to implement my design. This involved physically constructing the base of the robot, as well as researching and acquiring components such as sensors. Implementing the more complex sensors became a time-consuming task, involving much research and assistance from a variety of sources. A concurrent stage of the project involved researching and experimenting with different types of machine learning systems. I finally settled on using neural networks as the machine learning system to incorporate into my project. Neural nets can be thought of as a structure of interconnected nodes, through which information filters. The type of neural net that I chose to use is a type that requires a known set of data that serves to train the net to produce the desired output. Neural nets are particularly well suited for use with robotic systems as they can handle cases that lie at the extreme edges of the training set, such as may be produced by "noisy" sensor data. 
Through experimenting with available neural net code, I became familiar with the code and its function, and modified it to be more generic and reusable for multiple applications of neural nets.
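The kind of supervised training described above can be illustrated with a single artificial neuron (a minimal sketch, not the project's actual neural net code): a perceptron whose weights are nudged toward the desired output for each example in a known training set.

```python
class Perceptron:
    """A single artificial neuron: a weighted sum of inputs fed
    through a step function."""
    def __init__(self, n_inputs):
        self.weights = [0] * n_inputs
        self.bias = 0

    def predict(self, inputs):
        s = self.bias + sum(w * x for w, x in zip(self.weights, inputs))
        return 1 if s > 0 else 0

    def train(self, samples, epochs=20, lr=1):
        """Supervised training: for each (inputs, target) pair in the
        training set, nudge the weights toward the desired output."""
        for _ in range(epochs):
            for inputs, target in samples:
                error = target - self.predict(inputs)
                self.bias += lr * error
                self.weights = [w + lr * error * x
                                for w, x in zip(self.weights, inputs)]

# Train on the AND function; a robot would instead use labelled sensor data.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
p = Perceptron(2)
p.train(data)
print([p.predict(x) for x, _ in data])  # [0, 0, 0, 1]
```

A multi-layer network generalizes this idea, and its tolerance of inputs near the edges of the training distribution is what makes it attractive for noisy sensor data.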
Abstract:
The research aims first to develop a protocol for analyzing, through a set of indicators, the process of software reuse in the development of information systems that model business objects. The protocol consists of an analytical model and analysis grids to be used in classifying and tabulating empirically obtained data. For initial validation of the analysis protocol, a case study is carried out. The investigation takes place in one of the first and, at present, largest projects supplying reusable business-oriented software elements, IBM SANFRANCISCO, as well as in the first project developed in Brazil based on what it makes available, the Apontamento Universal de Horas (TIME SHEET System). Regarding its applicability in practice, the protocol proves comprehensive and adequate for understanding the process. As for the results of the case study, the data analysis reveals a situation in which the (researchers') expectations of reuse of business-oriented software elements were higher than what was observed. There was, however, reuse of low-level elements, which provided the infrastructure needed for the development of the project. The results, put in the context of the (developers') reuse expectations, are positive, in that the partnership brought methodological and technological benefits. On the other hand, some aspects proved restrictive for the application developer, owing to arbitrary choices made by the provider of the reusable elements.
Abstract:
In this thesis, we present a novel approach to combine both reuse and prediction of dynamic sequences of instructions, called Reuse through Speculation on Traces (RST). Our technique allows the dynamic identification of instruction traces that are redundant or predictable, and the reuse (speculative or not) of these traces. RST addresses an issue present in Dynamic Trace Memoization (DTM): traces not being reused because some of their inputs are not ready for the reuse test. These traces were measured to be 69% of all reusable traces in previous studies. One of the main advantages of RST over simply combining a value prediction technique with an unrelated reuse technique is that RST does not require extra tables to store the values to be predicted. Applying reuse and value prediction in unrelated mechanisms at the same time may require a prohibitive amount of table storage. In RST, the values are already stored in the Trace Memoization Table, and there is no extra cost in reading them compared with a non-speculative trace reuse technique. The input context of each trace (the input values of all instructions in the trace) already stores the values for the reuse test, which may also be used for prediction. Our main contributions include: (i) a speculative trace reuse framework that can be adapted to different processor architectures; (ii) specification of the modifications in a superscalar, superpipelined processor in order to implement our mechanism; (iii) study of implementation issues related to this architecture; (iv) study of the performance limits of our technique; (v) a performance study of a realistic, constrained implementation of RST; and (vi) simulation tools that can be used in other studies and that represent a superscalar, superpipelined processor in detail.
In a constrained architecture with realistic confidence, our RST technique is able to achieve average speedups (harmonic means) of 1.29 over the baseline architecture without reuse and 1.09 over a non-speculative trace reuse technique (DTM).
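The reuse-test idea above can be sketched in Python (an illustrative model with hypothetical names, not the simulated hardware): a memoization table stores each trace's input and output contexts, and in RST an input operand that is not yet ready is speculatively assumed to match the stored value instead of blocking reuse.

```python
class TraceMemoizationTable:
    """Maps a trace to its (input_context, output_context). In RST, the
    stored input context doubles as a value prediction for operands
    that are not ready at reuse-test time."""
    def __init__(self):
        self.table = {}

    def record(self, trace_id, inputs, outputs):
        self.table[trace_id] = (tuple(inputs), tuple(outputs))

    def lookup(self, trace_id, live_inputs):
        """Return (outputs, speculative_flag), or None if reuse is impossible.
        live_inputs may contain None for operands not yet available."""
        entry = self.table.get(trace_id)
        if entry is None:
            return None
        stored_inputs, outputs = entry
        speculative = False
        for stored, live in zip(stored_inputs, live_inputs):
            if live is None:        # not ready: predict it matches (RST)
                speculative = True
            elif live != stored:    # known mismatch: reuse test fails
                return None
        return outputs, speculative

tmt = TraceMemoizationTable()
tmt.record("loop_body", inputs=(10, 3), outputs=(13,))
print(tmt.lookup("loop_body", (10, 3)))     # ((13,), False) non-speculative reuse
print(tmt.lookup("loop_body", (10, None)))  # ((13,), True)  speculative reuse
print(tmt.lookup("loop_body", (10, 4)))     # None           reuse test fails
```

In hardware, a speculative reuse must later be validated against the real operand values and squashed on a misprediction, which is where the confidence mechanism mentioned below comes in.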
Abstract:
The work described in this thesis aims to support the distributed design of integrated systems and considers specifically the need for collaborative interaction among designers. Particular emphasis was given to issues which were only marginally considered in previous approaches, such as the abstraction of the distribution of design automation resources over the network, the possibility of both synchronous and asynchronous interaction among designers, and the support for extensible design data models. Such issues demand a rather complex software infrastructure, as possible solutions must encompass a wide range of software modules: from user interfaces to middleware to databases. To build such a structure, several engineering techniques were employed and some original solutions were devised. The core of the proposed solution is based on the joint application of two homonymic technologies: CAD Frameworks and object-oriented frameworks. The former concept was coined in the late 1980s within the electronic design automation community and comprises a layered software environment which aims to support CAD tool developers, CAD administrators/integrators and designers. The latter, developed during the last decade by the software engineering community, is a software architecture model for building extensible and reusable object-oriented software subsystems. In this work, we proposed to create an object-oriented framework which includes extensible sets of design data primitives and design tool building blocks. Such an object-oriented framework is included within a CAD Framework, where it plays important roles in typical CAD Framework services such as design data representation and management, versioning, user interfaces, design management and tool integration.
The implemented CAD Framework - named Cave2 - followed the classical layered architecture presented by Barnes, Harrison, Newton and Spickelmier, but the possibilities granted by the use of the object-oriented framework foundations allowed a series of improvements which were not available in previous approaches:
- object-oriented frameworks are extensible by design, thus this should also be true regarding the implemented sets of design data primitives and design tool building blocks. This means that both the design representation model and the software modules dealing with it can be upgraded or adapted to a particular design methodology, and that such extensions and adaptations will still inherit the architectural and functional aspects implemented in the object-oriented framework foundation;
- the design semantics and the design visualization are both part of the object-oriented framework, but in clearly separated models. This allows for different visualization strategies for a given design data set, which gives collaborating parties the flexibility to choose individual visualization settings;
- the control of the consistency between semantics and visualization - a particularly important issue in a design environment with multiple views of a single design - is also included in the foundations of the object-oriented framework. Such a mechanism is generic enough to also be used by further extensions of the design data model, as it is based on the inversion of control between view and semantics. The view receives the user input and propagates the event to the semantic model, which evaluates whether a state change is possible. If so, it triggers the change of state of both semantics and view. Our approach took advantage of this inversion of control and included a layer between semantics and view to take into account the possibility of multi-view consistency;
- to optimize the consistency control mechanism between views and semantics, we propose an event-based approach that captures each discrete interaction of a designer with his/her respective design views. The information about each interaction is encapsulated inside an event object, which may be propagated to the design semantics - and thus to other possible views - according to the consistency policy in use. Furthermore, the use of event pools allows for late synchronization between view and semantics in case of unavailability of a network connection between them;
- the use of proxy objects significantly raised the abstraction of the integration of design automation resources, as both remote and local tools and services are accessed through method calls on a local object. The connection to remote tools and services using a look-up protocol also completely abstracted the network location of such resources, allowing for resource addition and removal at runtime;
- the implemented CAD Framework is completely based on Java technology, so it relies on the Java Virtual Machine as the layer which grants the independence between the CAD Framework and the operating system.
All these improvements contributed to a higher abstraction in the distribution of design automation resources and also introduced a new paradigm for the remote interaction between designers. The resulting CAD Framework is able to support fine-grained collaboration based on events, so every single design update performed by a designer can be propagated to the rest of the design team regardless of their location in the distributed environment.
This can increase group awareness and allow a richer transfer of experiences among designers, significantly improving the collaboration potential when compared to previously proposed file-based or record-based approaches. Three different case studies were conducted to validate the proposed approach, each focusing on a subset of the contributions of this thesis. The first one uses the proxy-based resource distribution architecture to implement a prototyping platform using reconfigurable hardware modules. The second one extends the foundations of the implemented object-oriented framework to support interface-based design. Such extensions - design representation primitives and tool blocks - are used to implement a design entry tool named IBlaDe, which allows the collaborative creation of functional and structural models of integrated systems. The third case study concerns the possibility of integrating multimedia metadata into the design data model. This possibility is explored in the frame of an online educational and training platform.
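The inversion of control between view and semantics described in this abstract can be sketched as follows (a minimal Python illustration with hypothetical names, not the Cave2 Java code): views forward user events to the semantic model, which validates the state change and, if legal, notifies every attached view.

```python
class SemanticModel:
    """Owns the design state; views never mutate it directly
    (inversion of control between view and semantics)."""
    def __init__(self):
        self.state = "idle"
        self.views = []

    def attach(self, view):
        self.views.append(view)

    def handle_event(self, event):
        """Evaluate whether the requested state change is legal;
        if so, commit it and notify every attached view."""
        allowed = {("idle", "edit"), ("edit", "idle")}
        if (self.state, event) in allowed:
            self.state = event
            for view in self.views:   # multi-view consistency
                view.refresh(self.state)
            return True
        return False

class View:
    def __init__(self, model):
        self.model = model
        self.shown = None
        model.attach(self)

    def user_input(self, event):
        # The view never changes its own state; it forwards the event.
        return self.model.handle_event(event)

    def refresh(self, state):
        # Called back by the semantic model, never the other way around.
        self.shown = state

model = SemanticModel()
a, b = View(model), View(model)
a.user_input("edit")        # a designer interacts with one view...
print(a.shown, b.shown)     # edit edit  ...and all views stay consistent
```

The event-pool and proxy mechanisms of the thesis would sit between `user_input` and `handle_event`, buffering and routing event objects across the network.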
Abstract:
In credit card purchases made over the internet or by telesales, in many countries such as Brazil and the USA, the card is never physically presented at any point of the purchase or of the delivery of the goods or service, nor are mechanisms such as passwords that could assure the authenticity of the card and its holder in widespread use. At the same time, it is the merchants who bear the costs of these transactions. No previous study in the literature restricted credit card fraud detection to these channels, nor focused detection on its main stakeholders, the merchants. This work presents the results of applying five of the modeling techniques most cited in the literature, and analyzes the power of data sharing by comparing the models' results when trained only on a single store's data versus when that store shares data with other merchants.