994 results for application deployment
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Abstract:
Organizing activities and projects among several people, or making joint decisions, are issues that anyone faces in daily life. Simply coordinating or reaching agreement within even a small group can become a major problem, since each participant has their own preferences and it is often difficult to fit them in with the rest of the group's. This project, called “DealtDay”, was created to facilitate this task. The idea arose from the need to organize a group of people, in an easy and intuitive way, in order to, for example, arrange a meeting, get together to go for a walk, decide which film to watch, or simply vote on a choice among a group of users. The project was developed based on the relationship model on which most of today's social networks are built. As a means of using the project, a web application was built that, thanks to the design decisions taken, can be used on a computer, a tablet or a smartphone. This point is considered essential, since more and more people are setting aside conventional computers in favour of new technologies. In addition, a REST API was created, which allows all the features of the application to be used from any system capable of making HTTP requests. This particular project covers the development of the API, the web client, and the deployment of the application on a web server for the relevant testing.
Abstract:
Cloud Computing is a paradigm that enables access, in a simple and pervasive way, through the network, to shared and configurable computing resources. Such resources can be offered on demand to users in a pay-per-use model. As this paradigm advances, a single service offered by a cloud platform might not be enough to meet all of a client's requirements, so services provided by different cloud platforms need to be composed. However, current cloud platforms are not implemented using common standards; each one has its own APIs and development tools, which is a barrier to composing different services. In this context, Cloud Integrator, a service-oriented middleware platform, provides an environment to facilitate the development and execution of multi-cloud applications. Applications are compositions of services from different cloud platforms, represented by abstract workflows. However, Cloud Integrator has some limitations: (i) applications are executed locally; (ii) users cannot specify the application in terms of its inputs and outputs; and (iii) experienced users cannot directly determine the concrete Web services that will perform the workflow. To deal with these limitations, this work proposes Cloud Stratus, a middleware platform that extends Cloud Integrator and offers different ways to specify an application: as an abstract workflow or as a complete/partial execution flow. The platform enables application deployment on cloud virtual machines, so that several users can access it through the Internet. It also supports the access and management of virtual machines on different cloud platforms, and provides service monitoring mechanisms and assessment of QoS parameters. Cloud Stratus was validated through a case study consisting of an application that uses different services provided by different cloud platforms, and evaluated through computational experiments that analyse the performance of its processes.
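The abstract above describes mapping abstract workflows onto concrete services, optionally letting experienced users pin a specific service. As a hypothetical illustration of that idea (all names here — `ServiceRegistry`, `run_workflow`, the sample services — are invented, not taken from Cloud Stratus), a minimal resolver might look like this:

```python
# Illustrative sketch of abstract-workflow resolution in the spirit of
# Cloud Integrator/Cloud Stratus. All names are invented for illustration.

class ServiceRegistry:
    """Maps abstract activity names to concrete service callables."""
    def __init__(self):
        self._services = {}

    def register(self, activity, provider, func):
        # Several cloud providers may offer the same abstract activity.
        self._services.setdefault(activity, []).append((provider, func))

    def resolve(self, activity, preferred=None):
        candidates = self._services[activity]
        if preferred is not None:  # an experienced user pins a concrete service
            for provider, func in candidates:
                if provider == preferred:
                    return func
        return candidates[0][1]    # otherwise take the first match


def run_workflow(registry, activities, data, choices=None):
    """Execute a linear abstract workflow: each step feeds the next."""
    choices = choices or {}
    for activity in activities:
        func = registry.resolve(activity, preferred=choices.get(activity))
        data = func(data)
    return data


registry = ServiceRegistry()
registry.register("store", "cloudA", lambda x: {"stored": x})
registry.register("notify", "cloudB", lambda x: {**x, "notified": True})

result = run_workflow(registry, ["store", "notify"], "report.pdf")
print(result)  # {'stored': 'report.pdf', 'notified': True}
```

A complete/partial execution flow, in these terms, would simply be a workflow in which some or all activities carry a `choices` entry naming the concrete service.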
Abstract:
This paper presents work in progress on an on-demand software deployment system based on application virtualization concepts, which eliminates the need to install and configure software on each computer. Several mechanisms were created: mapping of the resources used by the application, to improve software distribution and startup; a virtualization middleware that provides all the resources needed for software execution; an asynchronous P2P transport used to optimize distribution over the network; and off-line support, so that the user can run the application even when the server is unavailable or the machine is off the network. © Springer-Verlag Berlin Heidelberg 2010.
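The off-line support described above can be pictured as a fetch-with-local-fallback policy. The following is a minimal sketch of that idea only — the class and behaviour are invented for illustration and are not the paper's implementation:

```python
# Sketch of off-line support: resources are fetched from the server while it
# is reachable and cached locally, so execution can continue when it is not.
# Names and behaviour are invented for illustration.

class OfflineCache:
    def __init__(self, fetch_remote):
        self._fetch_remote = fetch_remote  # callable; may raise ConnectionError
        self._cache = {}                   # local copies of app resources

    def get(self, resource):
        try:
            data = self._fetch_remote(resource)
            self._cache[resource] = data   # refresh the local copy while online
            return data
        except ConnectionError:
            # Server unavailable: fall back to the last cached copy.
            if resource in self._cache:
                return self._cache[resource]
            raise


server_up = True

def fetch(resource):
    if not server_up:
        raise ConnectionError("server unreachable")
    return f"contents of {resource}"

cache = OfflineCache(fetch)
print(cache.get("app.bin"))   # fetched from the server and cached
server_up = False
print(cache.get("app.bin"))   # served from the local cache while off-line
```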
Abstract:
Federal Transit Administration, Washington, D.C.
Abstract:
AIM: This work presents detailed experimental performance results from tests executed in the hospital environment for Health Monitoring for All (HM4All), a remote vital signs monitoring system based on a ZigBee® (ZigBee Alliance, San Ramon, CA) body sensor network (BSN). MATERIALS AND METHODS: Tests involved the use of six electrocardiogram (ECG) sensors operating in two different modes: the ECG mode involved the transmission of ECG waveform data and heart rate (HR) values to the ZigBee coordinator, whereas the HR mode included only the transmission of HR values. In the absence of hidden nodes, a non-beacon-enabled star network composed of sensing devices working on ECG mode kept the delivery ratio (DR) at 100%. RESULTS: When the network topology was changed to a 2-hop tree, the performance degraded slightly, resulting in an average DR of 98.56%. Although these performance outcomes may seem satisfactory, further investigation demonstrated that individual sensing devices went through transitory periods with low DR. Other tests have shown that ZigBee BSNs are highly susceptible to collisions owing to hidden nodes. Nevertheless, these tests have also shown that these networks can achieve high reliability if the amount of traffic is kept low. Contrary to what is typically shown in scientific articles and in manufacturers' documentation, the test outcomes presented in this article include temporal graphs of the DR achieved by each wireless sensor device. CONCLUSIONS: The test procedure and the approach used to represent its outcomes, which allow the identification of undesirable transitory periods of low reliability due to contention between devices, constitute the main contribution of this work.
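The article's temporal DR graphs boil down to computing, per device and per time window, the fraction of transmitted frames that reached the coordinator. A minimal sketch of that computation (the window length and the timestamp-list data layout are assumptions, not taken from HM4All):

```python
# Sketch: per-device delivery ratio (DR) time series, the quantity the
# article plots to expose transitory low-reliability periods.
# Window length and record format are assumptions for illustration.

def delivery_ratio_series(sent, received, window):
    """sent/received: timestamps (seconds) of frames for one device.
    Returns the DR for each consecutive window of `window` seconds."""
    if not sent:
        return []
    end = max(sent)
    series = []
    t = 0.0
    while t <= end:
        s = sum(1 for ts in sent if t <= ts < t + window)
        r = sum(1 for ts in received if t <= ts < t + window)
        series.append(r / s if s else None)  # None: nothing sent in window
        t += window
    return series


sent = [0.1, 0.5, 1.2, 1.8, 2.4]   # frames transmitted by one ECG sensor
received = [0.1, 0.5, 1.2, 2.4]    # frames delivered to the coordinator
print(delivery_ratio_series(sent, received, window=1.0))
# [1.0, 0.5, 1.0]  -- the middle window is a transitory low-DR period
```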
Abstract:
The advent of Wireless Sensor Network (WSN) technologies is paving the way for a panoply of new ubiquitous computing applications, some of them with critical requirements. In the ART-WiSe framework, we are designing a two-tiered communication architecture for supporting real-time and reliable communications in WSNs. Within this context, we have been developing a test-bed application for testing, validating and demonstrating our theoretical findings: a search&rescue/pursuit-evasion application. Basically, a WSN deployment is used to detect, localize and track a target robot, and a station controls a rescuer/pursuer robot until it gets close enough to the target robot. This paper describes how this application was engineered, focusing particularly on the implementation of the localization mechanism.
Abstract:
Research Project submitted in partial fulfilment of the requirements for the Master's Degree in Statistics and Information Management
Abstract:
OutSystems Platform is used to develop, deploy, and maintain enterprise web and mobile web applications. Applications are developed through a visual domain-specific language, in an integrated development environment, and compiled to a standard stack of web technologies. At the platform's core, a compiler and a deployment service transform the visual model into a running web application. As applications grow, compilation and deployment times increase as well, impacting the developer's productivity. In the previous model, a full application was the only compilation and deployment unit: when the developer published an application, even after changing only a very small aspect of it, the application would be fully compiled and deployed. Our goal is to reduce compilation and deployment times for the most common use case, in which the developer performs small changes to an application before compiling and deploying it. We modified the OutSystems Platform to support a new incremental compilation and deployment model that reuses previous computations as much as possible in order to improve performance. In our approach, the full application is broken down into smaller compilation and deployment units, increasing what can be cached and reused. We also observed that this finer-grained model would benefit from parallel execution, so we created a task-driven scheduler that executes compilation and deployment tasks in parallel. Our benchmarks show a substantial improvement in compilation and deployment times for the aforementioned development scenario.
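The core of the incremental model described above — split the application into units, cache each unit's compiled output, and recompile only what changed — can be sketched as follows. This is an invented illustration, not the OutSystems compiler; the class, the per-unit sha256 keying, and the fake "bytecode" output are all assumptions:

```python
# Hypothetical sketch of incremental compilation: the application is split
# into units, each unit's output is cached under a hash of its source, and
# only changed units are recompiled on the next publish.
# Names and the hashing scheme are invented for illustration.
import hashlib

class IncrementalCompiler:
    def __init__(self):
        self._cache = {}     # unit name -> (source hash, compiled output)
        self.compiled = []   # units actually compiled on the last publish

    def _compile_unit(self, name, source):
        self.compiled.append(name)
        return f"bytecode({source})"   # stand-in for real code generation

    def publish(self, units):
        """units: dict of unit name -> source text."""
        self.compiled = []
        outputs = {}
        for name, source in units.items():
            digest = hashlib.sha256(source.encode()).hexdigest()
            cached = self._cache.get(name)
            if cached and cached[0] == digest:
                outputs[name] = cached[1]        # reuse previous computation
            else:
                out = self._compile_unit(name, source)
                self._cache[name] = (digest, out)
                outputs[name] = out
        return outputs


c = IncrementalCompiler()
c.publish({"ui": "screen A", "logic": "action B"})
print(c.compiled)                                    # ['ui', 'logic']
c.publish({"ui": "screen A", "logic": "action B2"})  # one small change
print(c.compiled)                                    # ['logic']
```

A real implementation along the abstract's lines would additionally run the per-unit compilations as independent tasks on a parallel scheduler, since unchanged units impose no work and changed units rarely depend on each other.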
Abstract:
This paper presents a mobile information system denominated Vehicle-to-Anything Application (V2Anything App) and explains its conceptual aspects. The application is aimed at giving relevant information to Full Electric Vehicle (FEV) drivers by supporting the integration of several sources of data in a mobile application, thus contributing to the deployment of the electric mobility process. The V2Anything App provides recommendations to drivers about the FEV's range autonomy, the location of battery charging stations, information on the electricity market, and a route planner that takes into account public transportation and car- or bike-sharing systems. The main contributions of this application are the creation of an Information and Communication Technology (ICT) platform, recommender systems, data integration systems, a driver profile, and personalized range prediction. Thus, it is possible to deliver relevant information to FEV drivers related to the electric mobility process, the electricity market, public transportation, and FEV performance.
Abstract:
This note describes ParallelKnoppix, a bootable CD that allows creation of a Linux cluster in very little time. An experienced user can create a cluster ready to execute MPI programs in less than 10 minutes. The computers used may be heterogeneous machines, of the IA-32 architecture. When the cluster is shut down, all machines except one are in their original state, and the last can be returned to its original state by deleting a directory. The system thus provides a means of using non-dedicated computers to create a cluster. An example session is documented.
Abstract:
A previous study sponsored by the Smart Work Zone Deployment Initiative, “Feasibility of Visualization and Simulation Applications to Improve Work Zone Safety and Mobility,” demonstrated the feasibility of combining readily available, inexpensive software programs, such as SketchUp and Google Earth, with standard two-dimensional civil engineering design programs, such as MicroStation, to create animations of construction work zones. The animations reflect changes in work zone configurations as the project progresses, representing an opportunity to visually present complex information to drivers, construction workers, agency personnel, and the general public. The purpose of this study is to continue the work from the previous study to determine the added value and resource demands created by including more complex data, specifically traffic volume, movement, and vehicle type. This report describes the changes that were made to the simulation, including incorporating additional data and converting the simulation from a desktop application to a web application.
Abstract:
Global warming is one of the most alarming problems of this century. Initial scepticism concerning its validity is currently dwarfed by the intensification of extreme weather events, while the gradually rising level of anthropogenic CO2 is identified as its main driver. Most greenhouse gas (GHG) emissions come from large point sources (heat and power production and industrial processes), and the continued use of fossil fuels requires quick and effective measures to meet the world's energy demand while (at least) stabilising atmospheric CO2 levels. The framework known as Carbon Capture and Storage (CCS) — or Carbon Capture, Utilisation and Storage (CCUS) — comprises a portfolio of technologies applicable to large-scale GHG sources for preventing CO2 from entering the atmosphere. Amongst them, CO2 capture and mineralisation (CCM) presents the highest potential for CO2 sequestration, as the predicted carbon storage capacity (as mineral carbonates) far exceeds the estimated levels of the worldwide identified fossil fuel reserves. The work presented in this thesis aims at taking a step forward towards the deployment of an energy- and cost-effective process for simultaneous capture and storage of CO2 in the form of thermodynamically stable and environmentally friendly solid carbonates. R&D work on the process considered here began in 2007 at Åbo Akademi University in Finland. It involves the processing of magnesium silicate minerals with recyclable ammonium salts for extraction of magnesium at ambient pressure and 400–440 °C, followed by aqueous precipitation of magnesium in the form of hydroxide, Mg(OH)2, and finally Mg(OH)2 carbonation in a pressurised fluidised bed reactor at ~510 °C and ~20 bar CO2 partial pressure to produce high-purity MgCO3.
Rock material taken from the Hitura nickel mine, Finland, and serpentinite collected from Bragança, Portugal, were tested for magnesium extraction with both ammonium sulphate and bisulphate (AS and ABS) to determine optimal operating parameters, primarily reaction time, reactor type and presence of moisture. Typical magnesium extraction efficiencies range from 50 to 80% at 350–450 °C. In general, ABS performs better than AS, showing comparable efficiencies at lower temperatures and shorter reaction times. The best experimental results so far include 80% magnesium extraction with ABS at 450 °C in a laboratory-scale rotary kiln and 70% Mg(OH)2 carbonation in the PFB at 500 °C and 20 bar CO2 pressure for 15 minutes. The extraction reaction with ammonium salts is not at all selective towards magnesium: other elements, such as iron, nickel, chromium and copper, are also co-extracted. Their separation, recovery and valorisation are addressed as well and found to be of great importance. The exergetic performance of the process was assessed using Aspen Plus® software and pinch analysis. The choice of fluxing agent and its recovery method have a decisive influence on the performance of the process: AS is recovered by crystallisation, and in general the whole process requires more exergy (2.48–5.09 GJ/t CO2 sequestered) than ABS (2.48–4.47 GJ/t CO2 sequestered) when ABS is recovered by thermal decomposition. However, the corrosive nature of molten ABS and operational problems inherent to its thermal regeneration prohibit this route. Regeneration of ABS through addition of H2SO4 to AS (followed by crystallisation) results in an overall negative exergy balance (mainly at the expense of low-grade heat) but floods the system with sulphates. Although the ÅA route is still energy intensive, its performance is comparable to conventional CO2 capture methods using alkanolamine solvents.
An energy-neutral process depends on the availability and quality of nearby waste heat, and economic viability might be achieved with: magnesium extraction and carbonation levels ≥ 90%, the processing of CO2-containing flue gases (eliminating the expensive capture step), and production of marketable products.
Abstract:
This paper presents a study on applying integrated Global Positioning System (GPS) and Geographical Information System (GIS) technology to the reduction of construction waste. In the study, a prototype is developed from an automatic data capture system, such as barcoding, for construction material and equipment (M&E) management on site, and the integrated GPS and GIS technology is combined with the M&E system over a Wide Area Network (WAN). A case study is then conducted to demonstrate the deployment of the system. Experimental results indicate that the proposed system can minimize the amount of on-site material wastage.