882 results for software management infrastructure
Abstract:
Over the years, credit risk analysis has come to play a decisive role in the analysis of corporate financing, and it is a fundamental element for management bodies. Financing is a very important element in supporting business activity: since companies do not hold enough capital to carry out investments or current activities, they resort to credit. To reduce the risk of losses, companies must follow very rigorous credit analysis and collection policies. This control is more effective and efficient if the organization maintains close relationships with its customers. One increasingly common method of maintaining stable and lasting relationships is to adopt CRM (Customer Relationship Management) strategies. This dissertation aims to develop a credit risk analysis model for the customers of the company inCentea. The model makes it possible to determine whether a customer meets the conditions required for granting credit, thereby reducing the risks for inCentea. It is concluded that using a larger number of variables in the risk assessment allows the risk to be minimized. By integrating the credit analysis model into the CRM software, inCentea can base its decision to grant or deny credit on economic and financial indicators.
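The abstract does not list the model's variables; as a rough illustration of a multi-indicator credit-scoring model of this kind, the Python sketch below combines a few hypothetical economic and financial indicators into a single score with a grant/deny threshold. The indicator names, weights, and threshold are invented for illustration, not taken from the inCentea model.

    # Minimal sketch of a weighted credit-scoring model of the kind the
    # dissertation describes; the indicators, weights, and threshold are
    # hypothetical, not the inCentea model's.

    ECONOMIC_INDICATORS = {          # weight per indicator (sums to 1.0)
        "liquidity_ratio": 0.30,     # normalized to [0, 1] upstream
        "debt_to_equity": 0.25,      # lower is better, so inverted below
        "profit_margin": 0.25,
        "payment_history": 0.20,     # share of past invoices paid on time
    }

    def credit_score(client: dict) -> float:
        """Combine normalized indicators (each in [0, 1]) into one score."""
        score = 0.0
        for name, weight in ECONOMIC_INDICATORS.items():
            value = client[name]
            if name == "debt_to_equity":
                value = 1.0 - value  # high leverage lowers the score
            score += weight * value
        return score

    def grant_credit(client: dict, threshold: float = 0.6) -> bool:
        return credit_score(client) >= threshold

    client = {"liquidity_ratio": 0.8, "debt_to_equity": 0.4,
              "profit_margin": 0.5, "payment_history": 0.9}
    print(grant_credit(client))  # True: score 0.695 clears the threshold

A scorecard of this shape makes the grant/deny decision traceable to individual indicators, which is what integration with a CRM record requires.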
Abstract:
Secure Access For Everyone (SAFE) is an integrated system for managing trust using a logic-based declarative language. Logical trust systems authorize each request by constructing a proof from a context: a set of authenticated logic statements representing credentials and policies issued by various principals in a networked system. A key barrier to practical use of logical trust systems is the problem of managing proof contexts: identifying, validating, and assembling the credentials and policies that are relevant to each trust decision.
SAFE addresses this challenge by (i) proposing a distributed authenticated data repository for storing the credentials and policies, and (ii) introducing a programmable credential discovery and assembly layer that generates the appropriate tailored context for a given request. The authenticated data repository is built upon a scalable key-value store whose contents are named by secure identifiers and certified by the issuing principal. The SAFE language provides scripting primitives to generate and organize logic sets representing credentials and policies, materialize the logic sets as certificates, and link them to reflect delegation patterns in the application. The authorizer fetches the logic sets on demand, then validates and caches them locally for further use. Upon each request, the authorizer constructs the tailored proof context and provides it to the SAFE inference engine for certified validation. Delegation-driven credential linking with certified data distribution provides flexible and dynamic policy control, enabling security and trust infrastructure to be agile while addressing the perennial problems of today's certificate infrastructure: automated credential discovery, scalable revocation, and issuing credentials without relying on a centralized authority.
We envision SAFE as a new foundation for building secure network systems. We used SAFE to build secure services based on case studies drawn from practice: (i) a secure name-service resolver, similar to DNS, that resolves a name across multi-domain federated systems; (ii) a secure proxy shim that delegates access-control decisions in a key-value store; (iii) an authorization module for a networked infrastructure-as-a-service system with a federated trust structure (the NSF GENI initiative); and (iv) a secure cooperative data analytics service that adheres to individual secrecy constraints while disclosing the data. We present an empirical evaluation based on these case studies and demonstrate that SAFE supports a wide range of applications with low overhead.
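As a rough illustration of the proof-context idea described above, the Python sketch below assembles a context by following links between logic sets and runs a toy delegation check. The data structures and function names are hypothetical stand-ins; SAFE itself uses a logic-based declarative language, not this Python encoding.

    # Minimal sketch of proof-context assembly in a logical trust system,
    # in the spirit of SAFE; data structures and names are hypothetical.

    # The "repository": logic sets named by secure identifiers, each
    # carrying statements and links to other sets (delegation).
    REPO = {
        "set:root":  {"statements": [("mayAccess", "alice", "storeA")],
                      "links": ["set:alice"]},
        "set:alice": {"statements": [("delegates", "alice", "bob", "storeA")],
                      "links": []},
    }

    def assemble_context(set_id, seen=None):
        """Fetch a logic set and, following its links, build the tailored
        proof context for one authorization decision."""
        seen = seen or set()
        if set_id in seen:
            return []
        seen.add(set_id)
        logic_set = REPO[set_id]      # fetched on demand; validate/cache here
        statements = list(logic_set["statements"])
        for linked in logic_set["links"]:
            statements += assemble_context(linked, seen)
        return statements

    def authorized(principal, resource, context):
        """Toy inference: a direct grant, or one reachable via delegation."""
        if ("mayAccess", principal, resource) in context:
            return True
        return any(to == principal and res == resource
                   and authorized(frm, resource, context)
                   for (_, frm, to, res) in
                   [s for s in context if s[0] == "delegates"])

    context = assemble_context("set:root")
    print(authorized("bob", "storeA", context))   # True via alice's delegation

The point of the tailored context is visible even at this scale: only the sets reachable through delegation links are fetched and handed to the inference step.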
Abstract:
The use of structural health monitoring of civil structures is ever expanding; by assessing the dynamic condition of structures, informed maintenance management can be conducted at both the individual and network levels. With the continued growth of information-age technology, the potential arises for smart monitoring systems to be integrated with civil infrastructure to provide efficient information on the condition of a structure. The focus of this thesis is the integration of smart technology with civil infrastructure for the purposes of structural health monitoring. The technology considered in this regard is devices based on energy harvesting materials. While there has been considerable focus on the development and optimisation of such devices under steady-state loading conditions, their applications for civil infrastructure are less well known. Although research is still at an initial stage, studies into such applications are very promising. Using the dynamic response of structures to a variety of loading conditions, the energy harvesting output of such devices is established and the potential power output determined. Through a power-variance output approach, damage detection of deteriorating structures using the energy harvesting devices is investigated. A further application of the integration of energy harvesting devices with civil infrastructure investigated by this research is the use of the power output as an indicator for control. Four approaches are undertaken to determine the potential applications arising from integrating smart technology with civil infrastructure, namely:
• Theoretical analysis to determine the applications of energy harvesting devices for vibration-based health monitoring of civil infrastructure.
• Laboratory experimentation to verify the performance of different energy harvesting configurations for civil infrastructure applications.
• Scaled model testing as a method to experimentally validate the integration of the energy harvesting devices with civil infrastructure.
• Full-scale deployment of an energy harvesting device on a bridge structure.
These four approaches validate the application of energy harvesting technology with civil infrastructure from a theoretical, experimental and practical perspective.
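The abstract does not specify the power-variance computation; the sketch below illustrates one plausible reading, assuming a windowed variance of the harvested power signal compared against a healthy-state baseline. The window size, tolerance, and signal values are invented parameters, not the thesis's.

    # Minimal sketch of a power-variance damage indicator of the general
    # kind the thesis describes: track the variance of a harvester's power
    # output over time windows and compare it with a healthy baseline.
    import numpy as np

    def window_variances(power, window=1024):
        """Variance of the harvested power signal per non-overlapping window."""
        n = len(power) // window
        return np.array([power[i*window:(i+1)*window].var() for i in range(n)])

    def damage_indicator(power, baseline_var, window=1024, tol=0.25):
        """Flag windows whose power variance drifts more than `tol`
        (25% here) away from the healthy-state baseline."""
        variances = window_variances(power, window)
        return np.abs(variances - baseline_var) / baseline_var > tol

    # Illustrative use: a healthy record sets the baseline; a later record
    # with reduced dynamic response (e.g., stiffness loss) trips the flag.
    rng = np.random.default_rng(0)
    healthy = rng.normal(0.0, 1.0, 8192)          # stand-in for measured power
    baseline = window_variances(healthy).mean()
    degraded = rng.normal(0.0, 0.6, 8192)         # weaker dynamic response
    print(damage_indicator(degraded, baseline))   # mostly True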
Abstract:
This research has explored the relationship between system test complexity and tacit knowledge. It is proposed as part of this thesis that the process of system testing (comprising test planning, test development, test execution, test fault analysis, test measurement, and case management) is directly affected both by complexity associated with the system under test and by other sources of complexity, independent of the system under test but related to the wider process of system testing. While a certain amount of knowledge related to the system under test is inherent, tacit in nature, and therefore difficult to make explicit, it has been found that a significant amount of knowledge relating to these other sources of complexity can indeed be made explicit. While the importance of explicit knowledge has been reinforced by this research, there has been a lack of evidence to suggest that the availability of tacit knowledge to a test team is of any less importance to the process of system testing when operating in a traditional software development environment. Participants commonly expressed the sentiment that, even though a considerable amount of explicit knowledge relating to the system is freely available, a good deal of the knowledge relating to the system under test that is demanded for effective system testing is actually tacit in nature (approximately 60% of participants operating in a traditional development environment, and 60% of participants operating in an agile development environment, expressed similar sentiments). To cater for the availability of tacit knowledge relating to the system under test, and indeed for both the explicit and tacit knowledge required by system testing in general, an appropriate knowledge management structure needs to be in place. This appears to be required irrespective of the development methodology employed.
Abstract:
Software engineering researchers are challenged to provide increasingly more powerful levels of abstraction to address the rising complexity inherent in software solutions. One development paradigm that places models as the abstraction at the forefront of the development process is Model-Driven Software Development (MDSD). MDSD considers models as first-class artifacts, extending the capability for engineers to use concepts from the problem domain of discourse to specify apropos solutions. A key component of MDSD is domain-specific modeling languages (DSMLs), which are languages with focused expressiveness targeting a specific taxonomy of problems. The de facto approach is to first transform DSML models to an intermediate artifact in a high-level language (e.g., Java or C++), then execute the resulting code. Our research group has developed a class of DSMLs, referred to as interpreted DSMLs (i-DSMLs), where models are directly interpreted by a specialized execution engine with semantics based on model changes at runtime. This execution engine uses a layered architecture and is referred to as a domain-specific virtual machine (DSVM). As the domain-specific model being executed descends the layers of the DSVM, the semantic gap between the user-defined model and the services provided by the underlying infrastructure is closed. The focus of this research is the synthesis engine, the layer in the DSVM which transforms i-DSML models into executable scripts for the next lower layer to process. The appeal of an i-DSML is constrained, as it possesses unique semantics contained within the DSVM. Existing DSVMs for i-DSMLs exhibit tight coupling between the implicit model of execution and the semantics of the domain, making it difficult to develop DSVMs for new i-DSMLs without a significant investment of resources. At the onset of this research, only one i-DSML had been created using the aforementioned approach, for the user-centric communication domain. This i-DSML is the Communication Modeling Language (CML), and its DSVM is the Communication Virtual Machine (CVM). A major problem with the CVM's synthesis engine is that the domain-specific knowledge (DSK) and the model of execution (MoE) are tightly interwoven; consequently, subsequent DSVMs would need to be developed from inception with no reuse of expertise. This dissertation investigates how to decouple the DSK from the MoE and subsequently produce a generic model of execution (GMoE) from the remaining application logic. This GMoE can be reused to instantiate synthesis engines for DSVMs in other domains. The generalized approach to developing the model synthesis component of i-DSML interpreters utilizes a reusable framework loosely coupled to the DSK as swappable framework extensions. This approach involves first creating an i-DSML and its DSVM for a second domain, demand-side smart grid (microgrid) energy management, and designing the synthesis engine so that the DSK and MoE are easily decoupled. To validate the utility of the approach, the synthesis engines are instantiated using the GMoE and the DSKs of the two aforementioned domains, and an empirical study is performed to support our claim of reduced development effort.
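As a rough illustration of the decoupling the dissertation proposes, the sketch below separates a generic, domain-agnostic synthesis loop (the GMoE) from swappable domain-specific knowledge extensions. All class and method names are hypothetical, not the CVM's actual interfaces.

    # Minimal sketch of decoupling a generic model of execution (GMoE)
    # from swappable domain-specific knowledge (DSK); names are
    # hypothetical illustrations, not the dissertation's APIs.
    from abc import ABC, abstractmethod

    class DomainKnowledge(ABC):
        """Swappable DSK extension: all domain semantics live here."""
        @abstractmethod
        def interpret_change(self, change: dict) -> list[str]:
            """Map one model change to executable steps for the lower layer."""

    class CommunicationDSK(DomainKnowledge):
        def interpret_change(self, change):
            return [f"open_session({change['party']})"]

    class MicrogridDSK(DomainKnowledge):
        def interpret_change(self, change):
            return [f"dispatch_load({change['device']}, {change['kw']})"]

    class SynthesisEngine:
        """GMoE: a domain-agnostic loop that reacts to model changes at
        runtime and delegates their meaning to the plugged-in DSK."""
        def __init__(self, dsk: DomainKnowledge):
            self.dsk = dsk
        def synthesize(self, model_changes):
            script = []
            for change in model_changes:
                script += self.dsk.interpret_change(change)
            return script

    # The same engine serves two domains by swapping the DSK extension.
    print(SynthesisEngine(CommunicationDSK()).synthesize([{"party": "alice"}]))
    print(SynthesisEngine(MicrogridDSK()).synthesize([{"device": "hvac", "kw": 2}]))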
Abstract:
Over the past few years, logging has evolved from simple printf statements to more complex and widely used logging libraries. Today, logging information is used to support various development activities such as fixing bugs, analyzing the results of load tests, monitoring performance and transferring knowledge. Recent research has examined how to improve logging practices by informing developers what to log and where to log. Furthermore, the strong dependence on logging has led to the development of logging libraries that have reduced the intricacies of logging, which has resulted in an abundance of log information. Two challenges have recently emerged as modern software systems start to treat logging as a core aspect of their software: 1) infrastructural challenges, due to the plethora of logging libraries available today, and 2) processing challenges, due to the large number of log processing tools that ingest logs and produce useful information from them. In this thesis, we explore these two challenges. We first explore the infrastructural challenges that arise due to the plethora of logging libraries available today. As systems evolve, their logging infrastructure has to evolve too (commonly by migrating to new logging libraries). We explore logging library migrations within Apache Software Foundation (ASF) projects and find that close to 14% of the projects within the ASF migrate their logging libraries at least once. For the processing challenges, we explore the different factors which can affect the likelihood of a logging statement changing in the future in four open source systems, namely ActiveMQ, Camel, CloudStack and Liferay. Such changes are likely to negatively impact the log processing tools that must be updated to accommodate them. We find that 20%-45% of the logging statements within the four systems are changed at least once. We construct random forest classifiers and Cox models to determine the likelihood of both just-introduced and long-lived logging statements changing in the future. We find that file ownership, developer experience, log density and SLOC are important factors in determining the stability of logging statements.
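As a rough illustration of the modelling step named above, the sketch below trains a random forest classifier on synthetic data shaped like the factors the thesis identifies (file ownership, developer experience, log density, SLOC). The feature proxies and data are invented for illustration; only the model family comes from the abstract.

    # Minimal sketch of a random forest classifier over the kinds of
    # features the thesis names for logging-statement stability; the
    # training data is synthetic, purely to show the shape of the task.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(42)
    n = 500
    # Features per logging statement: [file_ownership, developer_experience,
    # log_density, sloc] -- illustrative proxies, not the thesis's metrics.
    X = np.column_stack([
        rng.uniform(0, 1, n),        # share of the file owned by one developer
        rng.integers(1, 200, n),     # prior commits by the author
        rng.uniform(0, 0.2, n),      # logging statements per SLOC
        rng.integers(50, 5000, n),   # file size in SLOC
    ])
    # Synthetic label: 1 if the logging statement was later changed.
    y = (rng.uniform(0, 1, n) < 0.3).astype(int)

    clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
    # Feature importances indicate which factors drive change likelihood.
    for name, imp in zip(["ownership", "experience", "log_density", "sloc"],
                         clf.feature_importances_):
        print(f"{name}: {imp:.2f}")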
Abstract:
This book brings together experts in the fields of spatial planning, land-use and infrastructure management to explore the emerging agenda of spatially-oriented integrated evaluation. It weaves together the latest theories, case studies, methods, policy and practice to examine and assess the values, impacts, benefits and the overall success of integrated land-use management. In doing so, the book clarifies the nature and roles of evaluation and puts forward guidance for future policy and practice.
Abstract:
In recent years the term Collaborative Economy has become popular without, so far, being defined unambiguously. This label covers experiences as diverse as time banks, urban gardens, startups and large digital platforms. The proliferation of such initiatives can be related to a multiplicity of factors, such as technological development, the economic recession and other overlapping crises (environmental, of care, of values, of the political), and a certain shift in social values. In 2014-2015, two investigations were carried out in Andalusia almost in parallel and with a similar methodology. The first aimed to identify Collaborative Economy practices in the university environment. The second identified entrepreneurship experiences at the regional level. In light of the results obtained, the following question arises about the very nature of the Collaborative Economy: are we looking at post-capitalist practices that open the way to a fairer and more egalitarian society, or rather at a response of capital that, once again, keeps privately extracting the value that is generated socially? Starting from the analysis of the set of initiatives identified in Andalusia, this article focuses on those based on free software and digital production, concluding that, thanks to the incorporation of certain aspects of the hacker ethic and the logics of open knowledge, they can be placed within a scenario of fostering the global commons against the prevailing logics of netarchical capitalism.
Abstract:
In recent years, the adaptation of Wireless Sensor Networks (WSNs) to application areas requiring mobility has increased the security threats against the confidentiality, integrity and privacy of information, as well as against connectivity. Since key management plays an important role in securing both information and connectivity, a proper authentication and key management scheme is required in mobility-enabled applications, where the authentication of a node with the network is a critical issue. In this paper, we present an authentication and key management scheme supporting node mobility in a heterogeneous WSN that consists of several low-capability sensor nodes and a few high-capability sensor nodes. We analyze our proposed solution analytically using MATLAB and by simulation (the OMNeT++ simulator) to show that it has lower memory requirements and good network connectivity and resilience against attacks compared to some existing schemes. We also propose two levels of secure authentication methods for the mobile sensor nodes for secure authentication and key establishment.
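The abstract does not give the scheme's protocol details; the sketch below illustrates generic building blocks for this setting, assuming a pre-shared symmetric key, HMAC-based challenge-response authentication, and derivation of a fresh pairwise session key when a mobile node joins a new cluster. It is not the paper's actual scheme.

    # Generic sketch of symmetric-key challenge-response authentication of
    # the kind used in WSN key management; an illustration of the building
    # blocks, not the paper's protocol.
    import hmac, hashlib, os

    PRE_SHARED_KEY = os.urandom(16)   # installed on the node pre-deployment

    def respond(challenge: bytes, key: bytes) -> bytes:
        """Mobile node proves key possession without revealing the key."""
        return hmac.new(key, challenge, hashlib.sha256).digest()

    def authenticate(node_response: bytes, challenge: bytes, key: bytes) -> bool:
        """High-capability node verifies the response in constant time."""
        expected = hmac.new(key, challenge, hashlib.sha256).digest()
        return hmac.compare_digest(node_response, expected)

    def session_key(key: bytes, challenge: bytes) -> bytes:
        """Derive a fresh pairwise key for the new cluster after a move."""
        return hmac.new(key, b"session" + challenge, hashlib.sha256).digest()

    challenge = os.urandom(16)                      # sent by the cluster head
    response = respond(challenge, PRE_SHARED_KEY)   # computed by mobile node
    assert authenticate(response, challenge, PRE_SHARED_KEY)
    k_s = session_key(PRE_SHARED_KEY, challenge)    # both sides derive this
    print("authenticated, session key:", k_s.hex()[:16], "...")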
Abstract:
This thesis analyses the influence of qualitative and quantitative herbage production on seasonal rangelands, and of herd and pasture use strategies, on the feed intake, body mass development and reproductive performance of sheep and goats in the Altai mountain region of Bulgan county (soum) in Khovd province (aimag). This westernmost county of Mongolia is characterized by a very poor road network and thus very difficult access to regional and national markets. In this localized context, the thesis explores the current rural development, the economic settings and the political measures that affect the traditional extensive livestock husbandry system and its importance for rural livelihoods. Livestock management practices still follow the traditional transhumant mode, relying fully on natural pasture. This renders animal feeding very vulnerable to the highly variable climatic conditions, which is one of many reasons for the gradually declining quantity and quality of pasture vegetation. Small ruminants, and especially goats, are the most important species securing the economic viability of their owners' livelihoods, and they are well adapted to the harsh continental climate and the present low-input management practices. It is likely that small ruminants will keep their vital role for the rural community in the future, since the weak local infrastructure and slow market development currently do not allow many options for income diversification. Since the profitability of a single animal is low, animal numbers tend to increase, whereas herd management does not change. Possibilities to improve current livestock management, and thus herders' livelihoods, in an environmentally, economically and socially sustainable manner are simulated through bio-economic modelling, and the implications are discussed at the regional and national scale. To increase the welfare of the local population, substantial infrastructural and market development is needed, accompanied by suitable pasture management schemes and policies.
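The abstract does not detail the bio-economic model itself; as a rough illustration of the kind of simulation involved, the toy Python sketch below grows a herd under a pasture carrying-capacity constraint and compares offtake (sale) strategies. All names and parameter values are illustrative assumptions, not the thesis's model.

    # Toy sketch of a bio-economic herd simulation of the general kind the
    # thesis describes: herd growth is limited by pasture carrying capacity,
    # and the offtake (sale) rate is the management lever traded off against
    # income. All parameter values are illustrative assumptions.

    def simulate(years=10, herd=100.0, capacity=150.0,
                 birth_rate=0.5, mortality=0.05, offtake=0.15, price=50.0):
        income = []
        for _ in range(years):
            # Density-dependent growth: births shrink as pasture is used up.
            pressure = min(herd / capacity, 1.0)
            births = herd * birth_rate * (1.0 - pressure)
            deaths = herd * (mortality + 0.05 * pressure)  # overgrazing cost
            sold = herd * offtake
            herd = max(herd + births - deaths - sold, 0.0)
            income.append(sold * price)
        return herd, sum(income)

    # Comparing strategies: low offtake lets the herd grow toward capacity
    # (raising grazing pressure), while higher offtake trades herd size
    # for current income.
    for rate in (0.05, 0.15, 0.25):
        final, total = simulate(offtake=rate)
        print(f"offtake {rate:.0%}: final herd {final:5.1f}, income {total:8.0f}")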
Abstract:
This keynote presentation will report some of our research work and experience on the development and application of relevant methods, models, systems and simulation techniques in support of different types and various levels of decision making for business, management and engineering. In particular, the following topics will be covered:
• Modelling, multi-agent-based simulation and analysis of the allocation management of carbon dioxide emission permits in China (Nanfeng Liu & Shuliang Li)
• Agent-based simulation of the dynamic evolution of enterprise carbon assets (Yin Zeng & Shuliang Li)
• A framework & system for extracting and representing project knowledge contexts using topic models and dynamic knowledge maps: a big data perspective (Jin Xu, Zheng Li, Shuliang Li & Yanyan Zhang)
• Open innovation: intelligent model, social media & complex adaptive system simulation (Shuliang Li & Jim Zheng Li)
• A framework, model and software prototype for modelling and simulation of deshopping behaviour and how companies respond (Shawkat Rahman & Shuliang Li)
• Integrating multiple agents, simulation, knowledge bases and fuzzy logic for international marketing decision making (Shuliang Li & Jim Zheng Li)
• A Web-based hybrid intelligent system for combined conventional, digital, mobile, social media and mobile marketing strategy formulation (Shuliang Li & Jim Zheng Li)
• A hybrid intelligent model for Web & social media dynamics, and evolutionary and adaptive branding (Shuliang Li)
• A hybrid paradigm for modelling, simulation and analysis of brand virality in social media (Shuliang Li & Jim Zheng Li)
• Network configuration management: attack paradigms and architectures for computer network survivability (Tero Karvinen & Shuliang Li)
Abstract:
This project supported the planning and conduct of a two-day Iowa Department of Transportation–hosted peer exchange for state agencies that have implemented some or all of the suggested strategies outlined in the Second Strategic Highway Research Program–sponsored project R10, Project Management Strategies for Complex Projects. Presentations were made by participating states, and several opportunities were provided for directed discussion. General themes emerging from the presentations and discussions were identified as follows:
• To implement improvements in project management processes, agency leadership needs to decide that a new approach to project management is worth pursuing and then dedicate resources to developing a project management plan.
• The change to formalized project management and five-dimensional project management (5DPM) requires a culture shift in agencies from segmented "silo" processes to collaborative, cooperative processes that make communication and collaboration high priorities.
• Agencies need trained project managers who are empowered to execute the project management plan, as well as properly trained functional staff.
• Project management can be centralized or decentralized with equal effect.
• After an agency's project management plan and structure are developed, software tools and other resources should be implemented to support the plan and structure.
• All projects will benefit from enhanced project management, but the project management plan should specify appropriate approaches for several project levels as defined by factors in addition to dollar value.
• Project management should be included in an agency's project development manual.
Abstract:
The last 10 years have seen a growing increase in requests for building maintenance services from the large-scale organized retail sector (Grande Distribuzione Organizzata, GDO); the demand is for services falling under Facility Management, i.e. relationships based on meeting quality standards predefined in the contract, with a guarantee of 24/7 intervention. The first part of the thesis frames the discipline of FM, its motivations, tools and the actors involved. After a regulatory overview of maintenance in Italy, a classification of the types of maintenance intervention and an assessment of the impact of maintenance on the Life Cycle Cost, the interoperative modalities of FM applied to building maintenance are analysed for the case of the GDO. The thesis was carried out as part of an internship in a company, which allowed the candidate to work on the case study of a Global Service contract with a major large-scale retail chain, and to use a management software package (PlaNet) with which, for each store, maintenance interventions and their location in the building are tracked. This provides a complete picture of the interventions, with already-known execution modalities, and ensures more effective management of service calls, which are followed through an integrated Call Center module. The thesis critically examines the main maintenance-related reference documents for a building: the Maintenance Plan (Piano di Manutenzione) and the Building File (Fascicolo dell'Opera), highlighting their limits due to the incompleteness of the information they provide. The final objective of the thesis is to propose a document integrating the Maintenance Plan and the Building File, in order to streamline the information flow and create a complete and exhaustive reference document that covers both the technical aspects of maintenance procedures and the safety requirements.
Abstract:
Software architecture is a high-level description of a software-intensive system that enables architects to have better intellectual control over the complete system. It is also used as a communication vehicle among the various system stakeholders. Variability in software-intensive systems is the ability of a software artefact (e.g., a system, subsystem, or component) to be extended, customised, or configured for deployment in a specific context. Although variability in software architecture is recognised as a challenge in multiple domains, there has been no formal consensus on how variability should be captured or represented. In this research, we addressed the problem of representing variability in software architecture through a three-phase approach. First, we examined the existing literature using the Systematic Literature Review (SLR) methodology, which helped us identify the gaps and challenges within the current body of knowledge. Equipped with the findings from the SLR, we formulated a set of design principles that were used to introduce variability management capabilities into an existing Architecture Description Language (ADL). The chosen ADL, ALI, was developed within our research group, and we have had complete access to it. Finally, we evaluated the new version of the ADL using two distinct case studies: one from the information systems domain, an Asset Management System (AMS), and another from the embedded systems domain, a Wheel Brake System (WBS). This thesis presents the main findings from the three phases of the research work, including a comprehensive study of the state of the art; the complete specification of an ADL focused on managing variability; and the lessons learnt from the evaluation work on two distinct real-life case studies.
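The abstract does not show ALI's syntax; the sketch below illustrates the underlying concept of a variability point, assuming a hypothetical model in which a component declares named alternatives that are bound per deployment context. It is an illustration of the idea, not the ADL produced by this research.

    # Minimal sketch of representing a variability point in an architecture
    # description, in the spirit of adding variability management to an ADL;
    # the model below is a hypothetical illustration, not ALI's syntax.
    from dataclasses import dataclass, field

    @dataclass
    class VariationPoint:
        """A named point where the architecture admits alternatives."""
        name: str
        alternatives: list[str]
        default: str

    @dataclass
    class Component:
        name: str
        variation_points: list[VariationPoint] = field(default_factory=list)

        def configure(self, choices: dict[str, str]) -> dict[str, str]:
            """Bind each variation point for one deployment context."""
            bound = {}
            for vp in self.variation_points:
                choice = choices.get(vp.name, vp.default)
                if choice not in vp.alternatives:
                    raise ValueError(f"{choice!r} is not valid for {vp.name}")
                bound[vp.name] = choice
            return bound

    # E.g., a braking controller that varies by redundancy scheme and bus.
    wbs = Component("BrakeController", [
        VariationPoint("redundancy", ["duplex", "triplex"], "duplex"),
        VariationPoint("bus", ["CAN", "ARINC429"], "CAN"),
    ])
    print(wbs.configure({"redundancy": "triplex"}))
    # {'redundancy': 'triplex', 'bus': 'CAN'}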