757 results for service delivery models


Relevance:

30.00%

Publisher:

Abstract:

Queueing systems constitute a central tool in modeling and performance analysis. Systems of this type appear in our everyday activities, and the theory of queueing systems was developed to provide models for forecasting the behavior of systems subject to random demand. The practical and useful applications of discrete-time queues motivate researchers to continue analyzing models of this type. The present contribution concerns a discrete-time Geo/G/1 queue in which some messages may need a second service time in addition to the first, essential service. Day-to-day life offers numerous examples of queueing situations, for example in manufacturing processes, telecommunications, and home automation; in this paper the particular application is video surveillance with intrusion recognition, where all arriving messages require the main service and only some may require the subsidiary service, provided by the server under different types of strategies. We carry out a thorough study of the model, deriving analytical results for the stationary distribution. The generating functions of the number of messages in the queue and in the system are obtained. The generating functions of the busy period, as well as of the sojourn times of a message in the server, the queue, and the system, are also provided.
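As a rough intuition pump for the model described above (not the paper's analytical derivation), the following minimal sketch simulates a discrete-time Geo/G/1 queue with an optional second service and estimates the time-average number of messages in the system. The arrival probability, second-service probability, and service-time distribution are illustrative assumptions, and the slot ordering convention (early vs. late arrivals) is simplified.

```python
import random

def simulate_geo_g1(p=0.2, theta=0.3, service=lambda: random.randint(1, 3),
                    slots=100_000, seed=42):
    """Slot-by-slot simulation of a discrete-time Geo/G/1 queue in which a
    message may need a second service (probability theta) after the first,
    essential one. All parameter values here are illustrative."""
    random.seed(seed)
    queue = 0        # messages waiting for service
    remaining = 0    # service slots left for the message being served
    area = 0         # accumulates the number in system, slot by slot
    for _ in range(slots):
        if random.random() < p:            # geometric (Bernoulli) arrival
            queue += 1
        if remaining == 0 and queue > 0:   # start the essential service
            queue -= 1
            remaining = service()
            if random.random() < theta:    # subsidiary second service
                remaining += service()
        area += queue + (1 if remaining > 0 else 0)
        if remaining > 0:
            remaining -= 1
    return area / slots                    # time-average number in system

print(f"mean number in system ~ {simulate_geo_g1():.3f}")
```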

Relevance:

30.00%

Publisher:

Abstract:

The work motivation construct is central to the theory and practice of many social science disciplines. Yet, because validated measures appropriate for deep cross-national comparison have appeared only recently, studies that contrast different administrative regimes remain scarce. This study represents an initial empirical effort to validate the Public Service Motivation (PSM) instrument proposed by Kim and colleagues (2013) in a previously unstudied context. The two former communist countries analyzed in this dissertation, Belarus and Poland, followed diametrically opposite development strategies: a fully decentralized administrative regime in Poland and a highly centralized regime in Belarus. The subjects of study are the employees (n = 677) of public and nonprofit organizations in the border regions of Podlaskie Wojewodstwo (Poland) and Hrodna Voblasc (Belarus).

Confirmatory factor analysis revealed three dimensions of public service motivation in the two regions: compassion, self-sacrifice, and attraction to public service. The statistical models tested in this dissertation suggest that nonprofit sector employees exhibit higher levels of PSM than their public sector counterparts. Nonprofit sector employees also reveal a similar set of values and work attitudes across the two countries. The study therefore concludes that, in terms of PSM, employees of nonprofit organizations constitute a homogeneous group that exists atop the administrative regimes.

However, the findings indicate significant differences between public sector agencies across the two countries. Contrary to expectations, the data suggest that organizational centralization in Poland is equal to, or for some items even higher than, that of Belarus. We can conclude that the absence of administrative decentralization of service provision in a country does not necessarily undermine decentralized practices within organizations. Further analysis reveals strong correlations between organizational centralization and PSM in the Polish sample, whereas in Belarus the correlations between organizational centralization items and PSM are weak and mostly insignificant.

The analysis indicates other factors beyond organizational centralization that significantly affect PSM in both sectors. The PSM of employees in the studied region is highly correlated with their participation in religious practices, political parties, or labor unions, as well as with the location of their organization in a capital and the type of social service provided.
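As a hedged illustration of the kind of correlation analysis described above (not the dissertation's actual code or item set), the sketch below computes construct scores as item means and country-level Pearson correlations between organizational centralization and PSM; the column names and input file are hypothetical.

```python
import pandas as pd
from scipy.stats import pearsonr

# Hypothetical survey data: Likert-scale items (column names are illustrative,
# not the actual items of the Kim et al. (2013) instrument).
df = pd.read_csv("survey_responses.csv")

psm_items = ["compassion_1", "compassion_2", "self_sacrifice_1",
             "self_sacrifice_2", "attraction_1", "attraction_2"]
centralization_items = ["central_1", "central_2", "central_3"]

# Simple mean scores per respondent for each construct.
df["psm"] = df[psm_items].mean(axis=1)
df["centralization"] = df[centralization_items].mean(axis=1)

# Country-by-country correlation, mirroring the Poland-vs-Belarus comparison.
for country, group in df.groupby("country"):
    r, p = pearsonr(group["centralization"], group["psm"])
    print(f"{country}: r = {r:.2f}, p = {p:.3f}")
```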

Relevance:

30.00%

Publisher:

Abstract:

The rapid growth of virtualized data centers and cloud hosting services is making the management of physical resources such as CPU, memory, and I/O bandwidth in data center servers increasingly important. Server management now involves dealing with multiple dissimilar applications with varying Service-Level Agreements (SLAs) and multiple resource dimensions. The multiplicity and diversity of resources and applications render administrative tasks more complex and challenging. This thesis aimed to develop a framework and techniques to substantially reduce data center management complexity. We specifically addressed two crucial data center operations. First, we precisely estimated the capacity requirements of client virtual machines (VMs) when renting server space in a cloud environment. Second, we proposed a systematic process for efficiently allocating physical resources to hosted VMs in a data center. To realize these dual objectives, accurately capturing the effects of resource allocations on application performance is vital. The benefits of accurate application performance modeling are manifold. Cloud users can size their VMs appropriately and pay only for the resources they need; service providers can also offer a new charging model based on the VMs' performance instead of their configured sizes. As a result, clients pay exactly for the performance they actually experience, while administrators can maximize their total revenue by exploiting application performance models and SLAs. This thesis made the following contributions. First, we identified resource control parameters crucial for distributing physical resources and characterizing contention among virtualized applications in a shared hosting environment. Second, we explored several modeling techniques and confirmed the suitability of two machine learning tools, Artificial Neural Networks and Support Vector Machines, for accurately modeling the performance of virtualized applications; we also suggested and evaluated modeling optimizations necessary to improve prediction accuracy when using these tools. Third, we presented an approach to optimal VM sizing that employs the performance models we created. Finally, we proposed a revenue-driven resource allocation algorithm that maximizes the SLA-generated revenue of a data center.
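The following sketch illustrates, under stated assumptions, the modeling idea behind the second contribution: fitting an SVM and a neural network to predict application performance from resource allocations. The data is synthetic and the hyperparameters arbitrary; the thesis's actual feature set and tuning are not reproduced here.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

# Hypothetical training data: rows are resource allocations given to a VM
# (CPU cap, memory MB, I/O bandwidth share); target is measured performance.
rng = np.random.default_rng(0)
X = rng.uniform([0.1, 512, 0.1], [4.0, 8192, 1.0], size=(500, 3))
y = 100 * X[:, 0] / (X[:, 0] + 1) + 0.01 * X[:, 1] + 20 * X[:, 2]  # synthetic

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for name, model in [("SVM", SVR(C=10.0)),
                    ("ANN", MLPRegressor(hidden_layer_sizes=(32, 32),
                                         max_iter=2000, random_state=0))]:
    pipe = make_pipeline(StandardScaler(), model)  # scaling helps both models
    pipe.fit(X_train, y_train)
    print(f"{name} R^2 on held-out allocations: {pipe.score(X_test, y_test):.3f}")
```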

Relevance:

30.00%

Publisher:

Abstract:

Concrete substructures are often subjected to environmental deterioration, such as sulfate and acid attack, which leads to severe damage and causes structural degradation or even failure. To improve the durability of concrete, High Performance Concrete (HPC), in which cement is partially replaced with pozzolanic materials, has come into wide use. However, HPC degradation mechanisms in sulfate and acidic environments are not completely understood. It is therefore important to evaluate the performance of HPC in such conditions and to predict concrete service life by establishing degradation models. This study began with a review of available environmental data in the State of Florida. A total of seven bridges were inspected. Concrete cores were taken from the bridge piles and subjected to microstructural analysis using a Scanning Electron Microscope (SEM). Ettringite was found to be the product of sulfate attack under sulfate and acidic conditions. To quantitatively analyze the level of concrete deterioration, an image processing program was designed in Matlab, and crack percentage (A_crack/A_surface) was used as the measure of deterioration. Correlation analysis was then performed between five related variables and concrete deterioration: environmental sulfate concentration and bridge age were found to be positively correlated with deterioration, while environmental pH level was negatively correlated. Besides environmental conditions, a concrete property factor, derived from laboratory testing data, was also included in the empirical equation. Experimental tests implemented an accelerated expansion test under a controlled environment, using specimens of eight different mix designs, and the effect of the pozzolanic replacement rate was taken into account in the empirical equation. The empirical equation was validated against existing bridges: the proposed equations compared well with field test results, with a maximum deviation of ±20%. Two examples showing how to use the proposed equations are provided to guide practical implementation. In conclusion, the proposed approach of relating microcracks to deterioration improves on existing diffusion and sorption models, since sulfate attack causes cracking in concrete. The imaging technique provided in this study can also be used to quantitatively analyze concrete samples.
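A minimal Python analogue of the crack-percentage measurement (the thesis used a Matlab program; this sketch assumes Otsu thresholding on a grayscale SEM image, both choices being illustrative):

```python
import numpy as np
from skimage import io, filters

def crack_percentage(path):
    """Estimate crack percentage (A_crack / A_surface) from a grayscale
    image; the thresholding choice is illustrative, not the thesis's code."""
    img = io.imread(path, as_gray=True)
    t = filters.threshold_otsu(img)   # separate dark cracks from the matrix
    cracks = img < t                  # cracks assumed darker than surface
    return 100.0 * cracks.sum() / cracks.size

print(f"crack percentage: {crack_percentage('core_sample.tif'):.2f}%")
```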

Relevance:

30.00%

Publisher:

Abstract:

Predicting user behaviour enables user assistant services to provide personalized services to their users. This requires a comprehensive user model, which can be created by monitoring user interactions and activities. BaranC is a framework that performs user interface (UI) monitoring (collecting all associated context data), builds a user model, and supports services that make use of that model. A prediction service, Next-App, was built to demonstrate the use of the framework and to evaluate the usefulness of such a prediction service. Next-App analyses a user's data, learns patterns, builds a model of the user, and finally predicts, based on the user model and the current context, which application(s) the user is likely to want to use. The prediction is proactive, reflecting the current context, and dynamic, in that it responds to changes in the user model, as occur over time when a user's habits change. An initial evaluation of Next-App indicates a high level of satisfaction with the service.
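To make the idea concrete, here is a toy frequency-based sketch of next-app prediction conditioned on the current app and context; Next-App itself builds on the much richer BaranC user model, so the names and structure here are assumptions.

```python
from collections import Counter, defaultdict

class NextAppPredictor:
    """Toy predictor in the spirit of Next-App: it learns which app tends to
    follow the current (app, context) pair from monitored usage history."""

    def __init__(self):
        self.counts = defaultdict(Counter)

    def observe(self, current_app, context, next_app):
        self.counts[(current_app, context)][next_app] += 1

    def predict(self, current_app, context, k=3):
        ranked = self.counts[(current_app, context)].most_common(k)
        return [app for app, _ in ranked]

p = NextAppPredictor()
p.observe("mail", "morning", "calendar")
p.observe("mail", "morning", "calendar")
p.observe("mail", "morning", "browser")
print(p.predict("mail", "morning"))   # ['calendar', 'browser']
```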

Relevance:

30.00%

Publisher:

Abstract:

A comprehensive user model, built by monitoring a user's current use of applications, can be an excellent starting point for building adaptive user-centred applications. The BaranC framework monitors all user interaction with a digital device (e.g. a smartphone) and also collects all available context data (such as from sensors in the device itself, in a smart watch, or in smart appliances) in order to build a full model of user application behaviour. The model built from the collected data, called the UDI (User Digital Imprint), is further augmented by analysis services, for example a service that produces activity profiles from smartphone sensor data. The enhanced UDI can then be the basis for an adaptive application that is user-centred because it is based on an individual user model. As BaranC supports continuous user monitoring, an application can adapt dynamically, in real time, to the current context (e.g. time, location, or activity). Furthermore, since BaranC continuously augments the user model with newly monitored data, the user model changes over time, and the adaptive application can adapt gradually to changing user behaviour patterns. BaranC has been implemented as a service-oriented framework in which the collection of data for the UDI and all sharing of UDI data are kept strictly under the user's control. Being service-oriented also allows its monitoring and analysis services to be easily used (with the user's permission) by third parties in order to provide third-party adaptive assistant services. An example third-party service demonstrator, built on top of BaranC, proactively assists a user by dynamically predicting, based on the current context, which apps and contacts the user is likely to need. BaranC introduces an innovative user-controlled, unified service model for monitoring and using personal digital activity data in order to provide adaptive user-centred applications. This aims to improve on the current situation, where the diversity of adaptive applications results in a proliferation of applications monitoring and using personal data, leading to a lack of clarity, a dispersal of data, and a diminution of user control.
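A minimal sketch of what a UDI-style event record and a simple analysis service might look like; the field names and the profile computation are assumptions for illustration, not BaranC's actual schema.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class UDIEvent:
    """One monitored interaction in a UDI-like user model (field names are
    illustrative assumptions; BaranC's real schema may differ)."""
    timestamp: datetime
    app: str
    action: str
    context: dict = field(default_factory=dict)   # e.g. sensor readings

def activity_profile(events):
    """Example analysis service: hourly usage counts per app, the kind of
    derived profile that could augment the UDI."""
    profile = {}
    for e in events:
        key = (e.app, e.timestamp.hour)
        profile[key] = profile.get(key, 0) + 1
    return profile

events = [UDIEvent(datetime.now(timezone.utc), "maps", "open",
                   {"location": "work"})]
print(activity_profile(events))
```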

Relevance:

30.00%

Publisher:

Abstract:

Objective: To identify the barriers to the unification of an Electronic Health Record (EHR) in Colombia. Materials and Methods: A qualitative study was carried out. Semi-structured interviews were conducted with professionals and experts from 22 health sector institutions in Bogotá and in the departments of Cundinamarca, Santander, Antioquia, Caldas, Huila, and Valle del Cauca. Results: Colombia is in a structuring phase for the implementation of the Unified Electronic Health Record. Unification is currently under way in 42 public healthcare providers (IPSs) in the department of Cundinamarca; elsewhere, EHR development in the country is private and carried out in-house, owing to the particular needs of each IPS. Conclusions: Human, financial, legal, organizational, technical, and professional barriers were identified in the departments surveyed. The unification of the EHR was found to depend on agreement among the IPSs of the public and private sectors, the EPSs (health insurers), and the national government.

Relevance:

30.00%

Publisher:

Abstract:

Structured abstract. Purpose: To deepen, in the grocery retail context, the roles of consumer perceived value and consumer satisfaction as antecedent dimensions of customer loyalty intentions. Design/methodology/approach: Employing a short version (12 items) of the original 19-item PERVAL scale of Sweeney & Soutar (2001), a structural equation modeling approach was applied to investigate the statistical properties of the indirect influence on loyalty of a reflective second-order customer perceived value model. The performance of three alternative estimation methods was compared through bootstrapping techniques. Findings: The results provided i) support for the use of the short form of the PERVAL scale in measuring consumer perceived value; ii) evidence that the influence of the four highly correlated independent latent predictors on satisfaction is well summarized by a higher-order reflective specification of consumer perceived value; iii) evidence that the emotional and functional dimensions were determinant for the relationship with the retailer; iv) evidence that parameter bias across the three estimation methods was significant only for small bootstrap sample sizes. Research limitations/implications: Future research is needed to explore the use of the short form of the PERVAL scale in more homogeneous groups of consumers. Originality/value: First, a recent short form of the PERVAL scale and a second-order reflective conceptualization of value were adopted to explain customer loyalty indirectly, mediated by customer satisfaction. Second, three alternative estimation methods were used and compared through bootstrapping and simulation procedures.
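As a generic illustration of the bootstrapping used to compare estimation methods (not the paper's SEM code), this sketch bootstraps a confidence interval for a correlation between a perceived-value score and loyalty intentions on synthetic data.

```python
import numpy as np

def bootstrap_ci(estimator, data, n_boot=2000, alpha=0.05, seed=0):
    """Generic nonparametric bootstrap for a parameter estimate; a stand-in
    for the bootstrapping techniques mentioned in the abstract."""
    rng = np.random.default_rng(seed)
    n = len(data)
    stats = np.array([estimator(data[rng.integers(0, n, n)])
                      for _ in range(n_boot)])
    lo, hi = np.quantile(stats, [alpha / 2, 1 - alpha / 2])
    return stats.mean(), (lo, hi)

# Illustrative two-column sample: perceived-value score vs. loyalty
# intentions (synthetic data, deliberately small sample).
data = np.random.default_rng(1).multivariate_normal(
    [0, 0], [[1, 0.6], [0.6, 1]], size=60)
corr = lambda d: np.corrcoef(d[:, 0], d[:, 1])[0, 1]
mean, (lo, hi) = bootstrap_ci(corr, data)
print(f"r = {mean:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```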

Relevance:

30.00%

Publisher:

Abstract:

With the CERN LHC program underway, data growth in the High Energy Physics (HEP) field has accelerated, and the use of Machine Learning (ML) in HEP will be critical during the HL-LHC program, when the data produced will reach the exascale. ML techniques have been used successfully in many areas of HEP; nevertheless, developing an ML project and implementing it for production use is a highly time-consuming task that requires specific skills. Complicating this scenario is the fact that HEP data is stored in the ROOT data format, which is largely unknown outside the HEP community. The work presented in this thesis focuses on the development of a Machine Learning as a Service (MLaaS) solution for HEP, aiming to provide a cloud service that allows HEP users to run ML pipelines via HTTP calls. These pipelines are executed by the MLaaS4HEP framework, which allows reading data, processing data, and training ML models directly from ROOT files of arbitrary size, held in local or distributed data sources. Such a solution provides HEP users who are not ML experts with a tool that lets them apply ML techniques in their analyses in a streamlined manner. Over the years the MLaaS4HEP framework has been developed, validated, and tested, and new features have been added. A first MLaaS solution was developed by automating the deployment of a platform equipped with the MLaaS4HEP framework. A service with APIs was then developed, so that a user, after being authenticated and authorized, can submit MLaaS4HEP workflows that produce trained ML models ready for the inference phase. A working prototype of this service is currently running on a virtual machine of INFN-Cloud and meets the requirements for addition to the INFN Cloud portfolio of services.
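A sketch of what a client-side workflow submission via HTTP might look like; the endpoint, payload fields, and token handling are assumptions for illustration, not the service's documented API.

```python
import requests

# Hypothetical MLaaS client call; everything below (URL, JSON schema,
# auth scheme) is an assumption for illustration only.
BASE_URL = "https://mlaas.example.org"
token = "..."  # obtained after authentication/authorization

workflow = {
    "input_files": ["root://eos.example.org//store/data/sample.root"],
    "preprocessing": {"branches": ["pt", "eta", "phi"]},
    "model": {"type": "keras", "epochs": 5},
}

resp = requests.post(f"{BASE_URL}/submit",
                     json=workflow,
                     headers={"Authorization": f"Bearer {token}"},
                     timeout=30)
resp.raise_for_status()
print("submitted workflow:", resp.json())
```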

Relevance:

30.00%

Publisher:

Abstract:

The interpretation of phase equilibrium and mass transport phenomena in gas/solvent-polymer systems in the molten or glassy state is relevant in many industrial applications. Among the tools available for predicting the thermodynamic properties of these systems in the molten/rubbery state is the group-contribution lattice-fluid equation of state (GCLF-EoS), developed by Lee and Danner and ultimately based on the lattice-fluid theory of Panayiotou and Vera. On the other side, Doghieri and Sarti proposed the non-equilibrium lattice-fluid (NELF) approach to consistently extend the description of the thermodynamic properties of solute-polymer systems, as obtained through a suitable equilibrium model, to non-equilibrium conditions below the glass transition temperature. The first objective of this work is to investigate the phase behaviour of solvent/polymer systems in the glassy state using the NELF model and to develop a predictive tool for gas or vapor solubility applicable to several different fields: membrane gas separation, barrier materials for food packaging, polymer-based gas sensors, and drug delivery devices. Within this effort, the group-contribution method developed by High and Danner for the application of the Panayiotou-Vera LF model is revisited, with attention to possible alternatives for the mixing rule for the characteristic interaction energy between segments. The work also analyzes gas permeability in polymer composite materials, formed by a polymer matrix in which domains of a second phase are dispersed; attention is focused on relations for the deviation from Maxwell's law as a function of the arrangement, shape, and loading of the dispersed domains.
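For reference, one common statement of Maxwell's law for the effective permeability of a composite, the baseline against which the deviations mentioned above are measured, is the following (assuming dilute spherical domains of permeability P_d at volume fraction phi in a matrix of permeability P_m):

```latex
% Maxwell's relation for the effective permeability P_eff of a composite:
% matrix permeability P_m, dispersed-phase permeability P_d, volume
% fraction \phi of dilute, spherical dispersed domains.
\[
  \frac{P_{\mathrm{eff}}}{P_{m}}
    = \frac{P_{d} + 2P_{m} - 2\phi\,(P_{m} - P_{d})}
           {P_{d} + 2P_{m} + \phi\,(P_{m} - P_{d})}
\]
```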

Relevance:

30.00%

Publisher:

Abstract:

The pervasive availability of connected devices in any industrial and societal sector is pushing for an evolution of the well-established cloud computing model. The emerging paradigm of the cloud continuum embraces this decentralization trend and envisions virtualized computing resources physically located between traditional datacenters and data sources. By totally or partially executing closer to the network edge, applications can have quicker reactions to events, thus enabling advanced forms of automation and intelligence. However, these applications also induce new data-intensive workloads with low-latency constraints that require the adoption of specialized resources, such as high-performance communication options (e.g., RDMA, DPDK, XDP, etc.). Unfortunately, cloud providers still struggle to integrate these options into their infrastructures. That risks undermining the principle of generality that underlies the cloud computing scale economy by forcing developers to tailor their code to low-level APIs, non-standard programming models, and static execution environments. This thesis proposes a novel system architecture to empower cloud platforms across the whole cloud continuum with Network Acceleration as a Service (NAaaS). To provide commodity yet efficient access to acceleration, this architecture defines a layer of agnostic high-performance I/O APIs, exposed to applications and clearly separated from the heterogeneous protocols, interfaces, and hardware devices that implement it. A novel system component embodies this decoupling by offering a set of agnostic OS features to applications: memory management for zero-copy transfers, asynchronous I/O processing, and efficient packet scheduling. This thesis also explores the design space of the possible implementations of this architecture by proposing two reference middleware systems and by adopting them to support interactive use cases in the cloud continuum: a serverless platform and an Industry 4.0 scenario. A detailed discussion and a thorough performance evaluation demonstrate that the proposed architecture is suitable to enable the easy-to-use, flexible integration of modern network acceleration into next-generation cloud platforms.
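A minimal sketch of the "agnostic high-performance I/O API" idea: applications program against an abstract channel, while the platform binds it to RDMA, DPDK, XDP, or plain sockets underneath. The method names and the fallback binding are assumptions, not the thesis's actual interface.

```python
from abc import ABC, abstractmethod
from concurrent.futures import Future

class AcceleratedChannel(ABC):
    """Implementation-agnostic high-performance I/O API in the spirit of the
    architecture above; method names are illustrative assumptions."""

    @abstractmethod
    def alloc_buffer(self, size: int) -> memoryview:
        """Hand out registered memory suitable for zero-copy transfers."""

    @abstractmethod
    def send_async(self, buf: memoryview) -> Future:
        """Submit a buffer; completion is reported asynchronously."""

class LoopbackChannel(AcceleratedChannel):
    """Trivial binding so the sketch runs without special hardware."""

    def alloc_buffer(self, size: int) -> memoryview:
        return memoryview(bytearray(size))

    def send_async(self, buf: memoryview) -> Future:
        fut = Future()
        fut.set_result(len(buf))   # pretend the bytes left the NIC already
        return fut

chan = LoopbackChannel()
buf = chan.alloc_buffer(4096)
print(chan.send_async(buf).result(), "bytes 'sent'")
```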

Relevance:

30.00%

Publisher:

Abstract:

The scientific success of the LHC experiments at CERN depends heavily on the availability of computing resources that efficiently store, process, and analyse the amount of data collected every year. This is ensured by the Worldwide LHC Computing Grid infrastructure, which connects computing centres distributed all over the world through high-performance networks. The LHC has an ambitious experimental program for the coming years, which includes large investments and improvements both in the detector hardware and in the software and computing systems, in order to cope with the huge increase in the event rate expected from the High Luminosity LHC (HL-LHC) phase and, consequently, with the huge amount of data that will be produced. In recent years the role of Artificial Intelligence has become prominent in the High Energy Physics (HEP) world. Machine Learning (ML) and Deep Learning algorithms have been used successfully in many areas of HEP, such as online and offline reconstruction programs, detector simulation, object reconstruction, identification, and Monte Carlo generation, and they will certainly be crucial in the HL-LHC phase. This thesis contributes to a CMS R&D project on an ML "as a Service" solution for HEP needs (MLaaS4HEP): a data service able to perform an entire ML pipeline (reading data, processing data, training ML models, serving predictions) in a completely model-agnostic fashion, directly using ROOT files of arbitrary size from local or distributed data sources. The framework has been updated with new features in the data preprocessing phase, giving the user more flexibility. Since the MLaaS4HEP framework is experiment agnostic, the ATLAS Higgs Boson ML challenge was chosen as the physics use case, with the aim of testing MLaaS4HEP and the contributions made in this work.
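As an illustration of the model-agnostic data-reading step such a pipeline performs, the sketch below reads branches from a ROOT file in chunks with uproot; the file path, tree name, and branch names are placeholders.

```python
import uproot  # de-facto standard Python reader for ROOT files

# Placeholder file, tree, and branch names; shown only to illustrate chunked,
# model-agnostic reading of ROOT data of arbitrary size.
with uproot.open("sample.root") as f:
    tree = f["Events"]
    for batch in tree.iterate(["pt", "eta", "phi"],
                              step_size="100 MB", library="np"):
        print(batch["pt"].shape)   # arrays ready for ML preprocessing
        break                      # one chunk is enough for the demo
```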

Relevance:

30.00%

Publisher:

Abstract:

As part of their digital transformation, many organizations are adopting new technologies to support the development, deployment, and management of their microservice-based architectures in cloud environments and across cloud providers. In this scenario, service and event meshes are emerging as dynamic, configurable infrastructure layers that facilitate complex interactions and the management of microservice-based applications and cloud services. The goal of this work is to analyze open-source mesh solutions (Istio, Linkerd, Apache EventMesh) from a performance perspective when they are used to manage communication between microservice-based workflow applications within the cloud environment. To this end, a system was built to deploy each of the components both within a single cluster and in a multi-cluster environment. Metric collection and summarization were carried out with a custom system compatible with the Prometheus data format. The tests allowed us to evaluate the performance of each component along with its effectiveness. Overall, while the maturity of the tested service mesh implementations was confirmed, the event mesh solution we used appeared to be a still-immature technology, owing to numerous operational problems.
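A tiny sketch of metric collection compatible with the Prometheus text exposition format, similar in spirit to the custom system described above; the endpoint and metric name are placeholders, and the parser assumes samples without explicit timestamps (the common case).

```python
import requests

def scrape_metric(endpoint, name):
    """Scrape one metric family from a Prometheus text-format endpoint.
    Each sample line looks like: name{labels} value"""
    text = requests.get(endpoint, timeout=5).text
    samples = []
    for line in text.splitlines():
        if line.startswith(name):   # skips '# HELP' / '# TYPE' comments
            samples.append(float(line.rsplit(" ", 1)[-1]))
    return samples

# Placeholder endpoint/metric; in tests like those above, per-component
# latency and throughput series would be gathered this way and summarized.
print(scrape_metric("http://localhost:9090/metrics",
                    "request_duration_seconds_sum"))
```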

Relevance:

30.00%

Publisher:

Abstract:

Today more than ever, with the recent war in Ukraine and the increasing number of attacks that strike the systems of nations and companies every day, the world realizes that cybersecurity can no longer be considered just a "cost". It must become a pillar of the infrastructures on which the security of our nations and the safety of people depend. Critical infrastructure sectors, such as energy, financial services, and healthcare, have become targets of cyberattacks by numerous criminal groups with growing resources and competencies, putting at risk the security and safety of companies and entire nations. This thesis investigates the state of the art in best practices for securing Industrial Control Systems (ICS). We study the differences between two security frameworks. The first is the Industrial Demilitarized Zone (I-DMZ), a perimeter-based security solution. The second is the Zero Trust Architecture (ZTA), which removes the concept of a perimeter and offers an entirely new approach to cybersecurity based on the slogan "Never trust, always verify". Starting from this premise, the Zero Trust model embeds strict authentication, authorization, and monitoring controls on every access to every resource. We defined two architectures, following the state of the art and the guidelines of cybersecurity experts, to compare the I-DMZ and Zero Trust approaches to ICS security. The goal is to demonstrate how a Zero Trust approach dramatically reduces the possibility of an attacker penetrating the network or moving laterally to compromise the entire infrastructure. A third architecture was defined based on cloud and fog/edge computing technology; it shows how cloud solutions can improve the security and reliability of infrastructure and production processes, which can benefit from a range of new functionalities that the cloud can offer as-a-Service. We implemented and tested our Zero Trust solution and its ability to block intrusions or attempted attacks.
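A minimal sketch of the "Never trust, always verify" principle: every request is authenticated, checked against an explicit policy, and evaluated for device posture, with no notion of a trusted network location. The policy table and request fields are illustrative, not the thesis's implementation.

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user: str
    device_healthy: bool
    resource: str
    action: str

# Toy policy table: which (user, resource, action) triples are allowed.
POLICY = {("operator", "plc-01", "read"), ("engineer", "plc-01", "write")}

def authorize(req: AccessRequest, token_valid: bool) -> bool:
    """Zero Trust style check: no request is trusted by network location;
    each one is authenticated, posture-checked, and matched to policy."""
    if not token_valid:                 # authentication, on every request
        return False
    if not req.device_healthy:          # device posture check
        return False
    return (req.user, req.resource, req.action) in POLICY  # least privilege

print(authorize(AccessRequest("operator", True, "plc-01", "read"), True))
```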

Relevance:

20.00%

Publisher:

Abstract:

Prosopis rubriflora and Prosopis ruscifolia are important species in the Chaquenian regions of Brazil. Because of the restricted occurrence and frequency of their physiognomy, they are excellent models for conservation genetics studies. The use of microsatellite markers (Simple Sequence Repeats, SSRs) has become increasingly important in recent years and has proven to be a powerful tool for both ecological and molecular studies. In this study, we present the development and characterization of 10 new markers for P. rubriflora and 13 new markers for P. ruscifolia. Genotyping was performed using 40 P. rubriflora samples and 48 P. ruscifolia samples from the Chaquenian remnants in Brazil. The polymorphism information content (PIC) of the P. rubriflora markers ranged from 0.073 to 0.791, and no null alleles or deviation from Hardy-Weinberg (HW) equilibrium were detected. The PIC values for the P. ruscifolia markers ranged from 0.289 to 0.883, but departure from HW equilibrium and null alleles were detected for certain loci; this departure may have resulted from anthropic activities, such as the presence of livestock, which is very common in the remnant areas. The novel polymorphic SSR markers described here may be helpful in future genetic studies of P. rubriflora and P. ruscifolia.
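For reference, PIC can be computed from allele frequencies with the standard Botstein et al. (1980) formula, sketched below; the example frequencies are illustrative.

```python
from itertools import combinations

def pic(allele_freqs):
    """Polymorphism information content (Botstein et al. 1980):
    PIC = 1 - sum(p_i^2) - sum_{i<j} 2 * p_i^2 * p_j^2"""
    h = 1 - sum(p * p for p in allele_freqs)              # expected heterozygosity
    correction = sum(2 * (pi * pj) ** 2
                     for pi, pj in combinations(allele_freqs, 2))
    return h - correction

# Example: a locus with three alleles at illustrative frequencies.
print(f"PIC = {pic([0.5, 0.3, 0.2]):.3f}")
```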