994 results for application deployment


Relevance:

30.00%

Publisher:

Abstract:

Purpose – The purpose of this research is to study the perceived impact of certain factors on the resource allocation processes of Nigerian universities and to suggest a framework that will help practitioners and academics understand and improve such processes. Design/methodology/approach – The study adopted an interpretive qualitative approach aimed at an ‘in-depth’ understanding of the resource allocation experiences of key university personnel and the perceived impact of the contextual factors affecting those processes. The analysis of individual narratives from each university established the conditions and factors impacting the resource allocation processes within each institution. Findings – Resource allocation process issues in Nigerian universities may be categorised into people (core and peripheral units’ challenges, and politics and power); process (resource allocation procedures); and resources (critical financial shortage and resource dependence response). The study also suggests that resourcing efficiency in Nigerian universities is strongly constrained by rivalry among resource managers. An efficient resource allocation process (ERAP) model is proposed to resolve the identified resourcing deficiencies. Research limitations/implications – The research does not aim to provide generalizable observations, but rather an ‘in-depth’ account of perceived factors and their impact on resource allocation processes in Nigerian universities. The study is limited to internal resource allocation issues within the universities and excludes external funding factors. The resource managers’ responses to the identified factors may affect their internal resourcing efficiency. Further research with larger empirical samples is required to obtain more broadly applicable results and implications for all universities. Originality/value – This study contributes a fresh framework for resource allocation processes focused on ‘people’, ‘process’ and ‘resources’. A middle-range theory triangulation is also developed to support a better understanding of resourcing process management. The study will be of interest to university managers and policy makers.

Relevance:

30.00%

Publisher:

Abstract:

Purpose – The purpose of this paper is to develop an integrated patient-focused analytical framework to improve quality of care in the accident and emergency (A&E) unit of a Maltese hospital. Design/methodology/approach – The study adopts a case study approach. First, a thorough literature review was undertaken to study the various methods of healthcare quality management. Second, a healthcare quality management framework was developed using a combined quality function deployment (QFD) and logical framework approach (LFA). Third, the proposed framework was applied to a Maltese hospital to demonstrate its effectiveness. The proposed framework has six steps, commencing with identifying patients’ requirements and concluding with implementing improvement projects. All the steps were undertaken with the involvement of the concerned stakeholders in the A&E unit of the hospital. Findings – The major problem facing the hospital under study was overcrowding at A&E, with a related shortage of beds. The combined framework ensures better A&E services and patient flow. QFD identifies and analyses the issues and challenges of A&E, while LFA helps develop project plans for healthcare quality improvement. The important outcomes of implementing the proposed quality improvement programme are fewer hospital admissions, faster patient flow, expert triage and shorter waiting times at the A&E unit. Increased emergency consultant cover and a faster first significant medical encounter were required to start addressing the problems effectively. Overall, the combined QFD and LFA method is effective in addressing quality of care in an A&E unit. Practical implications – The proposed framework can be easily integrated within any healthcare unit, as well as within entire healthcare systems, due to its flexible and user-friendly approach. It could be part of Six Sigma and other quality initiatives. Originality/value – Although QFD has been extensively deployed in healthcare settings to improve quality of care, little research has examined combining QFD and LFA to identify issues, prioritise them, derive improvement measures and implement improvement projects. Additionally, there is no research on QFD application in A&E. This paper bridges these gaps. Moreover, very little has been written on the Maltese healthcare system, so this study also contributes a demonstration of quality of emergency care in Malta.
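
A minimal sketch of the QFD prioritisation step such a framework relies on, using the conventional 0/1/3/9 relationship scale (the requirements, measures and scores below are hypothetical, not taken from the paper):

    import numpy as np

    # Hypothetical patient requirements with importance weights (1-5).
    requirements = ["short waiting time", "accurate triage", "bed availability"]
    weights = np.array([5, 4, 4])

    # Relationship matrix: rows = requirements, columns = candidate
    # improvement measures, scored on the usual QFD 0/1/3/9 scale.
    measures = ["more consultant cover", "fast-track stream", "discharge lounge"]
    relationships = np.array([
        [9, 9, 1],
        [3, 9, 0],
        [1, 0, 9],
    ])

    # Technical priority of each measure = importance-weighted column sum.
    priorities = weights @ relationships
    for measure, score in sorted(zip(measures, priorities), key=lambda t: -t[1]):
        print(f"{measure}: {score}")

The ranked priorities are then natural inputs to LFA-style project planning for the top-scoring measures.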

Relevance:

30.00%

Publisher:

Abstract:

Recently, energy efficiency, or green IT, has become a pressing issue for many IT infrastructures as they adopt energy-efficient strategies in their enterprise IT systems to minimize operational costs. Networking devices are shared resources connecting important IT infrastructures; in a data center network in particular, they operate 24/7 and consume a huge amount of energy, and it has been shown that this consumption is largely independent of the traffic through the devices. As a result, power consumption in networking devices is becoming an increasingly critical problem, of interest to both the research community and the general public. Multicast benefits group communications by saving link bandwidth and improving application throughput, both of which are important for a green data center. In this paper, we study the deployment strategy of multicast switches in hybrid mode in an energy-aware data center network, using the well-known fat-tree topology as a case study. The objective is to find the best locations at which to deploy multicast switches, not only to achieve optimal bandwidth utilization but also to minimize power consumption. We show that energy consumption can be reduced by nearly 50% after applying our proposed algorithm.
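
A minimal sketch of the kind of placement comparison involved, under a toy linear power model for a k-ary fat-tree (the switch counts are standard for fat-trees, but the link-count and power figures are illustrative assumptions, not the paper's algorithm):

    # Toy model: a k-ary fat-tree has (k/2)^2 core switches and k^2/2 switches
    # at each of the aggregation and edge layers. We compare deploying
    # multicast-capable switches at each layer under a simple power model.

    def fat_tree_counts(k):
        return {"edge": k * k // 2, "agg": k * k // 2, "core": (k // 2) ** 2}

    def multicast_links(layer, group_pods):
        # Toy link counts for one multicast tree reaching `group_pods` pods:
        # the higher the replication point, the longer the shared path and
        # the shorter the duplicated segments.
        return {"core": 1 + 2 * group_pods,
                "agg": 3 * group_pods,
                "edge": 4 * group_pods}[layer]

    def placement_cost(k, layer, group_pods, idle_w=150.0, link_w=1.0):
        # Power = idle power of multicast-capable switches at that layer
        #       + per-link power for the links the tree keeps active.
        return (fat_tree_counts(k)[layer] * idle_w
                + multicast_links(layer, group_pods) * link_w)

    k, pods = 8, 4
    best = min(("edge", "agg", "core"),
               key=lambda layer: placement_cost(k, layer, pods))
    print("cheapest layer for multicast-capable switches:", best)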

Relevance:

30.00%

Publisher:

Abstract:

The purpose of this research was to study the nutritional status of United States Coast Guard Law Enforcement Detachment (USCG/LEDET) personnel before and after prolonged travel at sea. To date there is no information available regarding the nutritional status of Coast Guard personnel. Forty-seven subjects were studied in total; each served as their own control. Demographic and health history data were collected at baseline. Dietary and exercise data were collected before and during the deployment. Body composition was determined before and after a deployment. The results of this study revealed that the USCG/LEDET personnel had high cholesterol and low fiber intakes. Cholesterol intake during deployment (516.8 ± 239.7 mg/day) was significantly higher (p = 0.047) than pre-deployment (448.2 ± 144.3 mg/day). Fiber intake was significantly lower than recommended. The results of this study indicate that LEDET personnel are put at higher nutritional risk while deployed and also show an increase in negative health behaviors associated with risk for cardiovascular disease (CVD) and other related diseases. This is crucial information for the USCG so that action can be taken to improve the physical well-being of their personnel.
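
A minimal sketch of the paired pre/post comparison used in such a design, where each subject serves as their own control (the intake values below are illustrative, not the study's data):

    import numpy as np
    from scipy import stats

    # Compare pre-deployment vs. deployment cholesterol intake (mg/day)
    # with a paired t-test, matching values subject by subject.
    pre = np.array([430.0, 455.2, 470.1, 440.8, 460.5])     # hypothetical
    during = np.array([500.3, 545.0, 530.7, 498.2, 520.9])  # hypothetical

    t_stat, p_value = stats.ttest_rel(during, pre)
    print(f"mean difference = {np.mean(during - pre):.1f} mg/day, "
          f"p = {p_value:.3f}")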

Relevance:

30.00%

Publisher:

Abstract:

Traffic demand increases are pushing aging ground transportation infrastructure to its theoretical capacity. The result is traffic bottlenecks that are a major cause of delay on urban freeways. In addition, the queues associated with those bottlenecks increase the probability of a crash while adversely affecting environmental measures such as emissions and fuel consumption. With limited resources available for network expansion, traffic professionals have developed active traffic management systems (ATMS) in an attempt to mitigate the negative consequences of traffic bottlenecks. Among these ATMS strategies, variable speed limits (VSL) and ramp metering (RM) have been gaining international interest for their potential to improve safety, mobility, and environmental measures at freeway bottlenecks. Though previous studies have shown the tremendous potential of VSL control and of VSL paired with ramp metering (VSLRM) control, little guidance has been developed to assist decision makers in the planning phase of a congestion mitigation project that is considering VSL or VSLRM control. To address this need, this study has developed a comprehensive decision/deployment support tool for the application of VSL and VSLRM control in recurrently congested environments. The tool assists practitioners in deciding on the most appropriate control strategy for a candidate site, identifying which candidate sites have the most potential to benefit from the suggested control strategy, and determining how to most effectively design the field deployment of the suggested control strategy at each implementation site. To do so, the tool comprises three key modules: (1) a Decision Module, (2) a Benefits Module, and (3) a Deployment Guidelines Module. Each module uses commonly known traffic flow and geometric parameters as inputs to statistical models and empirically based procedures to provide guidance on the application of VSL and VSLRM at each candidate site. These models and procedures were developed from the outputs of simulated experiments, calibrated with field data. To demonstrate the application of the tool, a list of real-world candidate sites was selected from the Maryland State Highway Administration Mobility Report. Field data from each candidate site were input into the tool to illustrate the step-by-step process required for efficient planning of VSL or VSLRM control. The output of the tool includes the suggested control system at each site, a ranking of the sites based on the expected benefit-to-cost ratio, and guidelines on how to deploy the VSL signs, ramp meters, and detectors at the deployment site(s). This research has the potential to assist traffic engineers in the planning of VSL and VSLRM control, thus enhancing the procedure for allocating limited resources for mobility and safety improvements on highways plagued by recurrent congestion.
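
A minimal sketch of the decision-and-ranking flow such a tool implements (the thresholds, model coefficients, and sites are placeholders, not the study's calibrated values):

    from dataclasses import dataclass

    @dataclass
    class Site:
        name: str
        peak_volume: float   # veh/h/lane at the bottleneck
        ramp_demand: float   # veh/h on the metered ramp, 0 if none
        crash_rate: float    # crashes per million vehicle-miles

    def suggest_control(site: Site) -> str:
        # Placeholder decision rule: pair VSL with ramp metering only when
        # a ramp contributes meaningful demand to the bottleneck.
        return "VSLRM" if site.ramp_demand > 300 else "VSL"

    def benefit_cost_ratio(site: Site) -> float:
        # Placeholder linear benefit model against a fixed deployment cost.
        benefit = 0.002 * site.peak_volume + 1.5 * site.crash_rate
        cost = 2.0 if suggest_control(site) == "VSLRM" else 1.0
        return benefit / cost

    sites = [Site("I-95 NB MP 12", 2100, 450, 1.8),    # hypothetical sites
             Site("I-695 WB MP 33", 1900, 0, 2.4)]
    for s in sorted(sites, key=benefit_cost_ratio, reverse=True):
        print(s.name, suggest_control(s), round(benefit_cost_ratio(s), 2))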

Relevance:

30.00%

Publisher:

Abstract:

The continuous flow of technological developments in the communications and electronics industries has led to the growing expansion of the Internet of Things (IoT). By leveraging the capabilities of smart networked devices and integrating them into existing industrial, leisure and communication applications, the IoT is expected to positively impact both the economy and society, reducing the gap between the physical and digital worlds. Several efforts have therefore been dedicated to the development of networking solutions addressing the diversity of challenges associated with such a vision. In this context, the integration of Information Centric Networking (ICN) concepts into the core of the IoT is a research area gaining momentum, involving both research and industry actors. The massive number of heterogeneous devices, as well as the data they produce, is a significant challenge for the wide-scale adoption of the IoT. In this paper we propose a service discovery mechanism, based on Named Data Networking (NDN), that leverages a semantic matching mechanism to achieve a flexible discovery process. The development of appropriate service discovery mechanisms enriched with semantic capabilities for understanding and processing context information is a key feature for turning raw data into useful knowledge and for ensuring interoperability among different devices and applications. We assessed the performance of our solution through the implementation and deployment of a proof-of-concept prototype. The results obtained illustrate the potential of integrating semantic and ICN mechanisms to enable flexible service discovery in IoT scenarios.
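
A minimal sketch of semantic (rather than exact-prefix) matching over NDN-style service names, using a tiny hypothetical concept hierarchy (the ontology, names, and scoring are illustrative assumptions, not the paper's mechanism):

    # Score registered service names against a requested concept by walking
    # a small concept hierarchy, instead of requiring exact name matches.
    ONTOLOGY = {  # child -> parent (illustrative)
        "temperature": "environment",
        "humidity": "environment",
        "environment": "sensing",
        "heart-rate": "health",
        "health": "sensing",
    }

    def ancestors(concept):
        chain = [concept]
        while concept in ONTOLOGY:
            concept = ONTOLOGY[concept]
            chain.append(concept)
        return chain

    def similarity(a, b):
        # 1.0 for an exact match, decreasing as the nearest common
        # ancestor in the hierarchy gets further away.
        ca, cb = ancestors(a), ancestors(b)
        common = next((c for c in ca if c in cb), None)
        if common is None:
            return 0.0
        return 1.0 / (1 + ca.index(common) + cb.index(common))

    services = ["/home/livingroom/temperature", "/home/bedroom/humidity",
                "/wearable/heart-rate"]

    def discover(requested_concept, threshold=0.3):
        scored = [(similarity(requested_concept, name.rsplit("/", 1)[-1]), name)
                  for name in services]
        return [n for s, n in sorted(scored, reverse=True) if s >= threshold]

    print(discover("environment"))  # matches temperature and humidity services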

Relevance:

30.00%

Publisher:

Abstract:

Feed-in tariff (FIT) schemes have been widely employed to promote renewable energy deployment. While FITs may be perceived by consumers as an extra cost, renewable energies cause a noticeable price reduction in wholesale electricity markets. We analyse both effects for the case of the Spanish electricity market during 2010. In particular, we examine the level of FITs at which savings and extra costs become similar on an hourly basis. Results are obtained for a wide range of renewable generation scenarios. It is found that FITs with null extra cost for consumers are in the range of 50–80 €/MWh. Some of the side effects of a high penetration of renewable energy in the market are analysed in detail and discussed.
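
A minimal sketch of the hourly break-even arithmetic behind such an analysis: the FIT level at which the wholesale price reduction caused by renewables (the merit-order effect) offsets the FIT premium paid by consumers (all numbers are illustrative, not the paper's data):

    def break_even_fit(p_wholesale, p_drop, demand_mwh, res_gen_mwh):
        # Consumer savings: the whole hourly demand is bought p_drop cheaper.
        savings = p_drop * demand_mwh
        # Extra cost: renewable generators receive (FIT - wholesale price)
        # per MWh. Setting extra cost equal to savings and solving for FIT:
        return p_wholesale + savings / res_gen_mwh

    fit = break_even_fit(p_wholesale=45.0,   # EUR/MWh wholesale price
                         p_drop=8.0,         # EUR/MWh merit-order reduction
                         demand_mwh=30_000,  # hourly demand
                         res_gen_mwh=9_000)  # hourly renewable generation
    print(f"break-even FIT = {fit:.1f} EUR/MWh")  # ~71.7, inside 50-80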

Relevance:

30.00%

Publisher:

Abstract:

This thesis presents a set of novel methods to biaxially package planar structures by folding and wrapping. The structure is divided into strips connected by folds that can slip during wrapping to accommodate material thickness. These packaging schemes are highly efficient, with theoretical packaging efficiencies approaching 100%. Packaging tests on meter-scale physical models have demonstrated packaging efficiencies of up to 83%. These methods avoid permanent deformation of the structure, allowing an initially flat structure to be deployed back to a flat state.
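
For reference, packaging efficiency in this context can be read as the ratio of the volume of structural material to the volume of the packaged envelope (a standard definition in deployable-structures work, stated here as an assumption rather than quoted from the thesis):

    \eta = \frac{V_{\text{material}}}{V_{\text{packaged}}} = \frac{A\,h}{V_{\text{packaged}}}

where A is the area of the planar structure and h its thickness; slipping folds remove the voids that fixed folds would leave between wrapped layers of finite thickness, which is what pushes \eta toward 1.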

Also presented are structural architectures and deployment schemes that are compatible with these packaging methods. These structural architectures use either in-plane pretension (suitable for membrane structures) or out-of-plane bending stiffness to resist loading. Physical models were constructed to realize these structural architectures. Experiments on lab-scale models show the deployment of these types of structures to be controllable and repeatable.

These packaging methods, structural architectures, and deployment schemes are applicable to a variety of spacecraft structures such as solar power arrays, solar sails, antenna arrays, and drag sails; they have the potential to enable larger variants of these structures while reducing the packaging volume required. In this thesis, these methods are applied to the preliminary structural design of a space solar power satellite. This deployable spacecraft, measuring 60 m × 60 m, can be packaged into a cylinder measuring 1.5 m in height and 1 m in diameter. It can be deployed to a flat configuration, where it acts as a stiff lightweight support framework for multifunctional tiles that collect sunlight, generate electric power, and transmit it to a ground station on Earth.

Relevance:

30.00%

Publisher:

Abstract:

The objective of the work described in this dissertation is the development of new wireless passive force monitoring platforms for applications in the medical field, specifically monitoring lower limb prosthetics. The developed sensors consist of stress-sensitive, magnetically soft amorphous metallic glass materials. The first technology is based on magnetoelastic resonance. Specifically, when exposed to an AC excitation field along with a constant DC bias field, the magnetoelastic material mechanically vibrates, and may reach resonance if the field frequency matches the mechanical resonant frequency of the material. The presented work illustrates that an applied loading pins portions of the strip, effectively decreasing the strip length, which results in an increase in the resonant frequency. The developed technology is deployed in a prototype lower limb prosthetic sleeve for monitoring forces experienced by the distal end of the residuum. This work also reports on the development of a magnetoharmonic force sensor composed of the same material. According to the Villari effect, applied loading changes the permeability of the magnetic sensor, which is visualized as an increase in the higher-order harmonic fields of the material. Specifically, by applying a constant low-frequency AC field and sweeping the applied DC biasing field, the higher-order harmonic components of the magnetic response can be visualized. This sensor technology was also instrumented onto a lower limb prosthetic as a proof of deployment; however, the magnetoharmonic sensor exhibited complications with sensor positioning and a need to tailor the interface mechanics between the sensing material and the surface being monitored. The novelty of these two technologies lies in their wireless passive nature, which allows for long-term monitoring over the lifetime of a given device. Additionally, the developed technologies are low cost. Recommendations for future work include improving the system for real-time monitoring, useful for data collection outside of a clinical setting.
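
As background for why pinning raises the resonance, a common first-order model for the fundamental resonant frequency of a free-standing magnetoelastic ribbon (a textbook relation, not quoted from the dissertation) is

    f_r = \frac{1}{2L} \sqrt{\frac{E}{\rho}}

where L is the ribbon length, E its Young's modulus, and \rho its density; loading that pins the strip shortens the effective L and therefore shifts f_r upward, which is the quantity the wireless interrogation reads out.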

Relevance:

30.00%

Publisher:

Abstract:

Combinatorial optimization is a complex engineering subject. Although the formulation often depends on the nature of the problem, which differs in setup, design, constraints, and implications, establishing a unifying framework is essential. This dissertation investigates the unique features of three important optimization problems that span from small-scale design automation to large-scale power system planning: (1) feeder remote terminal unit (FRTU) planning strategy considering the cybersecurity of the secondary distribution network in the electrical distribution grid, (2) physical-level synthesis for microfluidic lab-on-a-chip, and (3) discrete gate sizing in very-large-scale integration (VLSI) circuits. First, a cross-entropy optimization technique is proposed to handle FRTU deployment in the primary network while considering the cybersecurity of the secondary distribution network. Constrained by a monetary budget on the number of deployed FRTUs, the proposed algorithm identifies pivotal locations on a distribution feeder at which to install FRTUs over different time horizons. Then, multi-scale optimization techniques are proposed for digital microfluidic lab-on-a-chip physical-level synthesis. The proposed techniques handle variation-aware lab-on-a-chip placement and routing co-design while satisfying all constraints and accounting for contamination and defects. Last, the first fully polynomial time approximation scheme (FPTAS) is proposed for the delay-driven discrete gate sizing problem, exploring the theoretical view, since the existing works are heuristics with no performance guarantee. The intellectual contribution of the proposed methods establishes a novel paradigm bridging the gaps between professional communities.
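
A minimal sketch of the cross-entropy method applied to a budgeted placement problem of this kind (the per-location benefits and the penalty are illustrative placeholders, not the dissertation's model):

    import numpy as np

    rng = np.random.default_rng(0)
    n_locations, budget = 20, 5
    # Hypothetical per-location security benefit of installing an FRTU.
    benefit = rng.uniform(0.0, 1.0, n_locations)

    def score(x):
        # Reward covered benefit; penalize placements exceeding the budget.
        return benefit @ x - 10.0 * max(0, x.sum() - budget)

    p = np.full(n_locations, 0.5)  # Bernoulli sampling probabilities
    for _ in range(50):
        samples = (rng.random((200, n_locations)) < p).astype(int)
        scores = np.array([score(s) for s in samples])
        elite = samples[np.argsort(scores)[-20:]]  # top 10% of samples
        p = 0.7 * elite.mean(axis=0) + 0.3 * p     # smoothed CE update

    placement = np.argsort(p)[-budget:]
    print("suggested FRTU locations:", sorted(placement.tolist()))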

Relevance:

30.00%

Publisher:

Abstract:

Wireless Sensor Networks (WSNs) are widely used for various civilian and military applications, and have thus attracted significant interest in recent years. This work investigates the important problem of optimal deployment of WSNs in terms of coverage and energy consumption. Five deployment algorithms are developed for maximal sensing range and minimal energy consumption in order to provide optimal sensing coverage and maximum lifetime. All of the developed algorithms also include self-healing capabilities in order to restore the operation of WSNs after a number of nodes have become inoperative. Two centralized optimization algorithms are developed, one based on Genetic Algorithms (GAs) and one based on Particle Swarm Optimization (PSO). Both optimization algorithms use powerful central nodes to calculate and obtain the globally optimal outcomes. The GA is used to determine the optimal tradeoff between network coverage and the overall distance travelled by fixed-range sensors. The PSO algorithm is used to ensure 100% network coverage and minimize the energy consumed by mobile and range-adjustable sensors. Energy savings of 30% to 90% can be provided in different scenarios by using the developed optimization algorithms, thereby extending the lifetime of the sensors by a factor of 1.4 to 10. Three distributed optimization algorithms are also developed to relocate the sensors and optimize the coverage of networks with more stringent design and cost constraints. Each algorithm is cooperatively executed by all sensors to achieve better coverage. Two of our algorithms use the relative positions between sensors to optimize coverage and energy savings. They provide 20% to 25% more energy savings than existing solutions. Our third algorithm is developed for networks without self-localization capabilities and supports the optimal deployment of such networks without requiring the use of expensive geolocation hardware or energy-consuming localization algorithms. This is important for indoor monitoring applications, since current localization algorithms cannot provide good accuracy for sensor relocation algorithms in such indoor environments. Moreover, no sensor redeployment algorithms that can operate without self-localization systems had been developed before our work.
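
A minimal sketch of PSO-driven sensor placement for area coverage (the grid, swarm parameters, and sensing radius are illustrative assumptions, not the dissertation's formulation):

    import numpy as np

    rng = np.random.default_rng(1)
    n_sensors, radius, area = 10, 2.0, 10.0

    # Coverage objective: fraction of grid points within `radius` of a sensor.
    gx, gy = np.meshgrid(np.linspace(0, area, 25), np.linspace(0, area, 25))
    grid = np.stack([gx.ravel(), gy.ravel()], axis=1)

    def coverage(positions):
        pts = positions.reshape(n_sensors, 2)
        d = np.linalg.norm(grid[:, None, :] - pts[None, :, :], axis=2)
        return (d.min(axis=1) <= radius).mean()

    # Standard PSO over the concatenated sensor coordinates.
    n_particles, dim = 30, 2 * n_sensors
    x = rng.uniform(0, area, (n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_val = x.copy(), np.array([coverage(p) for p in x])
    gbest = pbest[pbest_val.argmax()]

    for _ in range(100):
        r1, r2 = rng.random((2, n_particles, dim))
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
        x = np.clip(x + v, 0, area)
        vals = np.array([coverage(p) for p in x])
        improved = vals > pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[pbest_val.argmax()]

    print(f"best coverage: {pbest_val.max():.1%}")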

Relevance:

30.00%

Publisher:

Abstract:

The first topic analyzed in the thesis is Neural Architecture Search (NAS). I focus on two different tools that I developed: one to optimize the architecture of Temporal Convolutional Networks (TCNs), a recently emerged convolutional model for time-series processing, and one to optimize the data precision of tensors inside CNNs. The first proposed NAS explicitly targets the optimization of the most distinctive architectural parameters of TCNs, namely dilation, receptive field, and the number of features in each layer; note that this is the first NAS to explicitly target these networks. The second proposed NAS instead focuses on finding the most efficient data format for a target CNN, at the granularity of the layer filter. Applying these two NASes in sequence allows an "application designer" to minimize the structure of the neural network employed, reducing its number of operations or its memory usage. The second topic described is the optimization of neural network deployment on edge devices. Exploiting the scarce resources of edge platforms is critical for efficient NN execution on MCUs. To this end, I introduce DORY (Deployment Oriented to memoRY), an automatic tool to deploy CNNs on low-cost MCUs. DORY, in different steps, can automatically manage the different levels of memory inside the MCU, offload the computation workload (i.e., the different layers of a neural network) to dedicated hardware accelerators, and automatically generate ANSI C code that orchestrates off- and on-chip transfers alongside the computation phases. On top of this, I introduce two optimized computation libraries that DORY can exploit to deploy TCNs and Transformers efficiently at the edge. I conclude the thesis with two applications in bio-signal analysis: heart rate tracking and sEMG-based gesture recognition.
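
A minimal sketch of the kind of search space such a TCN-oriented NAS explores, trading network cost against a receptive-field constraint (the cost proxy, kernel size, and target are illustrative assumptions, not the thesis's method):

    from itertools import product

    KERNEL = 3  # kernel size assumed for every layer

    def receptive_field(dilations):
        # Standard TCN receptive field: 1 + sum over layers of (k-1)*d.
        return 1 + sum((KERNEL - 1) * d for d in dilations)

    def cost(dilations, channels):
        # Crude proxy for operation count: layers * kernel * channels^2.
        return len(dilations) * KERNEL * channels * channels

    target_rf, best = 64, None
    for n_layers, channels in product(range(2, 7), (16, 32, 64)):
        dilations = [2 ** i for i in range(n_layers)]  # exponential schedule
        if receptive_field(dilations) >= target_rf:
            c = cost(dilations, channels)
            if best is None or c < best[0]:
                best = (c, dilations, channels)

    print("cheapest config meeting the receptive-field target:", best)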

Relevance:

30.00%

Publisher:

Abstract:

LHC experiments produce an enormous amount of data, estimated at the order of a few petabytes per year. Data management, for both storage and processing operations, takes place on the Worldwide LHC Computing Grid (WLCG) infrastructure. In recent years, however, many more resources have become available on High Performance Computing (HPC) farms, which generally have many computing nodes, each with a high number of processors. Large collaborations are working to use these resources in the most efficient way compatible with the constraints imposed by their computing models (data distributed on the Grid, authentication, software dependencies, etc.). The aim of this thesis project is to develop a software framework that allows users to process a typical data analysis workflow of the ATLAS experiment on HPC systems. The developed analysis framework will be deployed on the computing resources of the Open Physics Hub project and on the CINECA Marconi100 cluster, in view of the switch-on of the Leonardo supercomputer, foreseen in 2023.

Relevance:

30.00%

Publisher:

Abstract:

Software Defined Networking (SDN), along with Network Function Virtualisation (NFV), has brought an evolution in telecommunications, laying the basis for 5G networks and their softwarisation. The separation between the data plane and the control plane, together with the decentralisation of the latter, has allowed better scalability and reliability while reducing latency. A lot of effort has been put into creating a distributed controller, but most of the solutions provided so far take a monolithic approach that reduces the benefits of having a software-defined network. Disaggregating the controller and handling it as microservices solves the problems faced when working with a monolithic approach. Microservices enable the cloud-native approach, which is essential to benefit from the architecture of the 5G Core defined by the 3GPP standards development organisation. Applying the concept of NFV makes it possible to have a softwarised version of the entire network structure. The expectation is that the 5G Core will be deployed on an orchestrated cloud infrastructure, and in this thesis work we aim to provide an application of this concept by using Kubernetes as an implementation of the MANO standard. This means Kubernetes acts as a Network Function Virtualisation Orchestrator (NFVO), Virtualised Network Function Manager (VNFM) and Virtualised Infrastructure Manager (VIM), rather than just a Network Function Virtualisation Infrastructure. While OSM has been adopted for this purpose in various scenarios, this work proposes Kubernetes, as opposed to OSM, as the MANO standard implementation.
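
A minimal sketch of Kubernetes playing the VNFM role: instantiating a containerised network function as a standard Deployment through the official Python client (the namespace, image, and network function are hypothetical):

    from kubernetes import client, config

    config.load_kube_config()  # use load_incluster_config() when in-cluster

    # A containerised network function (here a hypothetical 5G Core AMF)
    # modelled as an ordinary Deployment: Kubernetes then instantiates,
    # scales, and heals the VNF, i.e. it acts as the VNFM.
    container = client.V1Container(
        name="amf",
        image="example.org/5gcore/amf:latest",  # hypothetical image
        ports=[client.V1ContainerPort(container_port=8805)],
    )
    deployment = client.V1Deployment(
        api_version="apps/v1",
        kind="Deployment",
        metadata=client.V1ObjectMeta(name="amf"),
        spec=client.V1DeploymentSpec(
            replicas=2,
            selector=client.V1LabelSelector(match_labels={"nf": "amf"}),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels={"nf": "amf"}),
                spec=client.V1PodSpec(containers=[container]),
            ),
        ),
    )
    client.AppsV1Api().create_namespaced_deployment(namespace="5g-core",
                                                    body=deployment)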

Relevance:

20.00%

Publisher:

Abstract:

A rapid, sensitive and specific method for quantifying propylthiouracil in human plasma, using methylthiouracil as the internal standard (IS), is described. The analyte and the IS were extracted from plasma by liquid-liquid extraction using an organic solvent (ethyl acetate). The extracts were analyzed by high performance liquid chromatography coupled with electrospray tandem mass spectrometry (HPLC-MS/MS) in negative mode (ES-). Chromatography was performed using a Phenomenex Gemini C18 5 μm analytical column (4.6 mm × 150 mm i.d.) and a mobile phase consisting of methanol/water/acetonitrile (40/40/20, v/v/v) + 0.1% formic acid. For propylthiouracil and the IS, the optimized values of the declustering potential, collision energy and collision exit potential were -60 V, -26 eV and -5 V, respectively. The method had a chromatographic run time of 2.5 min and a linear calibration curve over the range 20-5000 ng/mL. The limit of quantification (LOQ) was 20 ng/mL. The stability tests indicated no significant degradation. This HPLC-MS/MS procedure was used to assess the bioequivalence of two propylthiouracil 100 mg tablet formulations in healthy volunteers of both sexes in the fasted and fed states. The geometric means and 90% confidence intervals (CI) of the Test/Reference percent ratios were, without and with food respectively: 109.28% (103.63-115.25%) and 115.60% (109.03-122.58%) for Cmax, and 103.31% (100.74-105.96%) and 103.40% (101.03-105.84%) for AUClast. This method offers advantages over those previously reported, in terms of both a simple liquid-liquid extraction without clean-up procedures and a faster run time (2.5 min). The LOQ of 20 ng/mL is well suited for pharmacokinetic studies. The assay performance results indicate that the method is precise and accurate enough for the routine determination of propylthiouracil in human plasma. The test formulation, with and without food, was bioequivalent to the reference formulation. Food administration increased the Tmax and decreased the bioavailability (Cmax and AUC).
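
A minimal sketch of the geometric-mean-ratio computation underlying such a bioequivalence assessment, simplified to a paired design rather than a full crossover model (the Cmax values are illustrative, not the study's data):

    import numpy as np
    from scipy import stats

    # Log-transform paired Test/Reference Cmax values, build a 90% CI on the
    # mean log-difference, then exponentiate back to the ratio scale.
    cmax_test = np.array([812.0, 640.5, 755.2, 698.1, 731.4])  # hypothetical
    cmax_ref = np.array([750.3, 601.2, 690.8, 655.0, 700.9])   # hypothetical

    diff = np.log(cmax_test) - np.log(cmax_ref)
    mean, sem = diff.mean(), stats.sem(diff)
    t90 = stats.t.ppf(0.95, df=len(diff) - 1)  # two-sided 90% CI

    gmr = np.exp(mean)
    lo, hi = np.exp(mean - t90 * sem), np.exp(mean + t90 * sem)
    print(f"GMR = {gmr:.2%}, 90% CI = ({lo:.2%}, {hi:.2%})")
    print("bioequivalent (80-125% rule):", 0.80 <= lo and hi <= 1.25)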